<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>eb74277e-4ee</externalid>
      <Title>Policy Design Manager, Age-Appropriate Design</Title>
      <Description><![CDATA[<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>
<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. In addition, you will advise cross-functional teams on opportunities for age-appropriate helpfulness, including beneficial use cases for younger users where appropriate.</p>
<p>Safety is core to our mission, and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Serve as an internal subject matter expert, leveraging deep expertise in child safety, adult content, youth development, and age-appropriate design to:
<ul>
<li>Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases</li>
<li>Design evaluation frameworks for testing model performance in areas of expertise</li>
<li>Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities</li>
<li>Review flagged content to drive enforcement and policy improvements</li>
<li>Update our usage policies based on feedback collected from external experts, our enforcement team, and edge cases that you will review</li>
</ul>
</li>
<li>Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions for users across different age groups</li>
<li>Advise on age assurance approaches and content classification frameworks in partnership with Enforcement, Product, Engineering, and Legal teams</li>
<li>Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s)</li>
<li>Keep up to date with new and existing AI policy norms, regulatory requirements (e.g., age-appropriate design codes), and industry standards, and use these to inform our decision-making on policy areas</li>
</ul>
<p><strong>You may be a good fit if you have experience:</strong></p>
<ul>
<li>As a researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy</li>
<li>Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions</li>
<li>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification/labeling systems</li>
<li>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development</li>
<li>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams</li>
<li>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space</li>
<li>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations</li>
<li>Navigating and prioritizing work efforts amidst ambiguity</li>
</ul>
<p><strong>Salary:</strong></p>
<p>$245,000-$285,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI products, classifier development, product policy decisions, content moderation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156326008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86d4c902-c89</externalid>
      <Title>Safeguards Analyst, Human Exploitation &amp; Abuse</Title>
      <Description><![CDATA[<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>
<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>
<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>
<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>
<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>
<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>
<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>
<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>
<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>
<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>
<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>
<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>
<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>
<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>
<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>
<li>Strong attention to detail and ability to maintain accurate documentation</li>
<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>
</ul>
<p><strong>Preferred:</strong></p>
<ul>
<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>
<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>
<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>
<li>A deep interest in AI safety and responsible technology development</li>
<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>
</ul>
<p><strong>Compensation:</strong></p>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156333008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e03253e3-c7f</externalid>
      <Title>Safeguards Analyst, Human Exploitation &amp; Abuse</Title>
      <Description><![CDATA[<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>
<p>You will be a member of the user well-being team, and your initial focus will be on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>
<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>
<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>
<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>
<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>
<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>
<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>
<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>
<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>
<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>
<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>
<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>
<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>
<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>
<li>Strong attention to detail and ability to maintain accurate documentation</li>
<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>
</ul>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>
<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>
<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>
<li>A deep interest in AI safety and responsible technology development</li>
<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>
</ul>
</ul>
<p><strong>Compensation:</strong></p>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis, detection and review workflows, sensitive content, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, commercial sexual exploitation, NGO and industry ecosystem, open-source investigations, threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156333008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>74c9dcaa-dfd</externalid>
      <Title>Policy Design Manager, Age-Appropriate Design</Title>
      <Description><![CDATA[<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>
<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. In addition, you will advise cross-functional teams on opportunities for age-appropriate helpfulness, including beneficial use cases for younger users where appropriate.</p>
<p>Safety is core to our mission, and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.</p>
<p><strong>You may be a good fit if you have experience:</strong></p>
<ul>
<li>As a researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy</li>
<li>Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions</li>
<li>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification/labeling systems</li>
<li>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development</li>
<li>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams</li>
<li>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space</li>
<li>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations</li>
<li>Navigating and prioritizing work efforts amidst ambiguity</li>
</ul>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI, classifier development, policy evaluations, product policy decisions, content moderation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156326008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f578503a-af9</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>
<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>
<p>You will have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, with strong skills in ethical reasoning and risk assessment and the ability to draw on resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>
<p>In addition, you will bring strong communication, interpersonal, analytical, and ethical decision-making skills, along with a commitment to continuous improvement of processes that prioritize safety and risk mitigation. You will also have expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>
<p>Preferred qualifications include experience working in Trust and Safety at a social media company, leveraging AI or other automation tools; experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms; and expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$43.75 - $62.50 USD hourly</Salaryrange>
      <Skills>Improving Large Language Models (LLMs), Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), Online safety and reducing harm, Ethical reasoning and risk assessment, Data analysis, Experience working in a Trust and Safety for a social media company, Collaborating with child safety organizations, Red-teaming and adversarial testing of Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>43.75</Compensationmin>
      <Compensationmax>62.50</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097904007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f818897-404</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Process appeals, audit automations, and properly label use cases in the system.</li>
<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support, and the ability to propose solutions to increase the security and safety of our platform.</li>
<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret and apply xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>
<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.</li>
<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>
<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>
</ul>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097907007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b51f695-3f8</externalid>
      <Title>Trust &amp; Safety Operations Analyst</Title>
      <Description><![CDATA[<p><strong>Location:</strong> San Francisco</p>
<p><strong>Employment Type:</strong> Full time</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$189K – $280K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>At OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.</p>
<p>We support users across ChatGPT, our API, enterprise offerings, and developer tools, handling sensitive inbound cases, building detection and enforcement systems, and scaling operational processes to meet the demands of a fast-moving, high-stakes environment.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking experienced, senior-level analysts who specialize in one or more of the following areas:</p>
<ul>
<li><strong>Content Integrity &amp; Scaled Enforcement</strong> – Detecting, reviewing, and acting on policy violations, harmful content, and emerging abuse patterns at scale.</li>
<li><strong>Emerging Risk Operations</strong> – Identifying, triaging, and mitigating new and complex safety, policy, or integrity challenges in a rapidly evolving AI landscape.</li>
</ul>
<p>In this role, you will own high-sensitivity workflows, act as an incident manager for complex cases, and build scalable operational systems, including tooling, automation, and vendor processes, that reinforce user safety and trust while meeting our legal, ethical, and product obligations.</p>
<p>We use a hybrid work model of 3 days in the San Francisco office per week and offer relocation assistance to new employees.</p>
<p>Please note: This role may involve exposure to sensitive content, including material that is sexual, violent, or otherwise disturbing.</p>
<p><strong>In This Role, You Will:</strong></p>
<ul>
<li>Handle and resolve high-priority cases in your area of specialization (scaled content enforcement, fraud/scams, privacy/regulatory, or emerging risks).</li>
<li>Perform in-depth risk evaluations and investigations using internal tools, product signals, and third-party data.</li>
<li>Act as incident manager for escalations requiring nuanced policy, legal, or regulatory interpretation.</li>
<li>Partner with cross-functional teams to design and implement world-class operational workflows, decision trees, and automation strategies.</li>
<li>Build feedback loops from casework to inform product, engineering, and policy improvements.</li>
<li>Develop and maintain playbooks, SOPs, macros, and knowledge resources for internal teams and vendors.</li>
<li>Lead or contribute to cross-functional projects, from zero-to-one process builds to global operational scale-ups.</li>
<li>Monitor operational health through case quality audits, SLA adherence, escalation accuracy, and user satisfaction metrics.</li>
<li>Train and support vendor teams, ensuring consistent quality and alignment with OpenAI’s trust and safety standards.</li>
</ul>
<p><strong>You Might Thrive in This Role If You:</strong></p>
<ul>
<li>Have 5+ years of experience in one or more of: trust &amp; safety, fraud prevention, scam investigation, privacy/legal operations, compliance, or other risk/integrity domains, ideally in a global or high-growth tech environment.</li>
<li>Leverage OpenAI technology to enhance workflows, improve decision-making, and scale operational impact.</li>
<li>Bring deep domain expertise in your specialization area and familiarity with relevant legal, policy, and technical frameworks.</li>
<li>Have a track record of scaling operations, building processes, and working cross-functionally to improve performance and safety outcomes.</li>
<li>Possess exceptional analytical skills, with the ability to detect patterns, assess risk, and recommend policy or product changes based on evidence.</li>
<li>Communicate with clarity, empathy, and precision, especially in sensitive user-facing contexts.</li>
<li>Thrive in ambiguous, high-autonomy environments and balance speed with diligence.</li>
<li>Are comfortable with frequent context switching, managing multiple projects, and prioritizing impact.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$189K – $280K</Salaryrange>
      <Skills>trust &amp; safety, fraud prevention, scam investigation, privacy/legal operations, compliance, risk assessment, incident management, content enforcement, operational workflow design, automation, vendor management, quality audits, cross-functional collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in developing and commercializing artificial intelligence (AI) systems. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>189000</Compensationmin>
      <Compensationmax>280000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/eb54b316-26fb-498f-a68c-9990ff9c402c</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>