<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>b7959209-0c2</externalid>
      <Title>Safeguards Policy Analyst, Fraud &amp; Scams</Title>
      <Description><![CDATA[<p>As a Safeguards Policy Analyst focused on Fraud &amp; Scams, you will design, build, and execute enforcement workflows that detect and mitigate fraud and scam-related harms on Anthropic&#39;s products.</p>
<p>This role sits within the Integrity &amp; Authenticity (I&amp;A) team, where you will function both as a policy owner and work closely with threat investigative and enforcement teams.</p>
<p>Key responsibilities include drafting, maintaining, and iterating on Fraud &amp; Scams policies; conducting regular structured policy reviews; developing detailed threat models for fraud and scam vectors; and staying current on the fraud and scam landscape.</p>
<p>You will also design and architect automated enforcement systems and human review workflows that scale effectively while maintaining high precision and recall.</p>
<p>Additionally, you will serve as the primary policy point of contact for ML and Engineering teams developing fraud detection classifiers, working to translate policy intent into technical artifacts and training signals.</p>
<p>If you have experience working as a Trust &amp; Safety professional with a focused background in fraud, scams, or financial crime, particularly in a tech platform or AI context, you may be a good fit for this role.</p>
<p>Preferred qualifications include experience at a major technology platform, financial institution, or fraud intelligence firm in a policy, operations, or investigative capacity, familiarity with the generative AI risk landscape, and background in threat intelligence, financial crimes compliance (AML/KYC), or law enforcement focused on cyber-enabled fraud.</p>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>policy design, fraud and scam analysis, threat modeling, automated enforcement systems, human review workflows, ML and Engineering collaboration, generative AI risk landscape, threat intelligence, financial crimes compliance, law enforcement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174857008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>