<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>a922c6ae-3c1</externalid>
      <Title>Technical CBRN-E Threat Investigator</Title>
      <Description><![CDATA[<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>
<p>You will work at the intersection of AI safety and CBRN security, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against threat actors who may attempt to leverage our AI technology for developing weapons, synthesizing dangerous compounds, or creating biological harm.</p>
<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>
<p>Responsibilities:</p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>
<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>
<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>
<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>
</ul>
<p>Strong candidates may also have</p>
<ul>
<li>Advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or related field</li>
<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>
<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Familiarity with synthetic biology, biotechnology, or dual-use research</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230,000-$290,000 USD</Salaryrange>
      <Skills>SQL, Python, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, threat actor profiling, threat intelligence frameworks, large language models, AI safety, machine learning security, technology abuse investigation, threat detection systems, abuse monitoring, counter-WMD experience, Top Secret security clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066997008</Applyto>
      <Location>Remote-Friendly (Travel Required) | San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>753e9465-6a0</externalid>
      <Title>Senior Security Software Engineer, eBPF &amp; Security Sensors</Title>
      <Description><![CDATA[<p>We&#39;re seeking an exceptional engineer to join our Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call rotations</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>
<li>Have a track record of building and maintaining internal developer tools or security platforms</li>
<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
<li>Have experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>
<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>
<li>Have experience with query optimization for large datasets</li>
<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>
<li>Can write maintainable and secure code in Python</li>
<li>Have experience working with security teams and translating requirements into technical solutions</li>
<li>Can lead technical projects with minimal guidance</li>
<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Have strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>
<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Have strong systems thinking with the ability to identify and mitigate risks in complex environments</li>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Building security tooling from the ground up</li>
<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Detection engineering or security operations</li>
<li>SOAR platform or automation development</li>
<li>Data lake or database architecture</li>
<li>API design and internal platform creation</li>
<li>Applying ML/AI to security problems</li>
<li>Scaling security operations in a high-growth environment</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, test-driven software development, CI/CD, infrastructure-as-code, query optimization, cloud infrastructure, serverless architectures, building security tooling, implementing security monitoring solutions, detection engineering, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108521008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dcc14ac2-f76</externalid>
      <Title>Security Software Engineer, Detection &amp; Response Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response (D&amp;R) capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call shifts</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, and/or data pipelines</li>
<li>Have a track record of building and maintaining internal developer tools or security platforms</li>
<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>Building security tooling from the ground up</li>
<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Detection engineering or security operations</li>
<li>SOAR platform or automation development</li>
<li>Data lake or database architecture</li>
<li>API design and internal platform creation</li>
<li>Applying ML/AI to security problems</li>
<li>Scaling security operations in a high-growth environment</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Python, data pipelines, ML-powered detection systems, security telemetry, Claude, security operations, incident response, security tooling, security monitoring (SIEM, log aggregation, EDR), detection engineering, SOAR platforms, automation development, data lake and database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4595463008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1410a549-44e</externalid>
      <Title>Director of Machine Learning, Safety &amp; Mods</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Director of Machine Learning to lead Reddit&#39;s efforts in building industry-leading ML systems that keep our platform safe and foster healthy online communities.</p>
<p>This leader will drive the strategy, development, and deployment of machine learning models that detect and prevent harmful content and behavior at scale.</p>
<p>In this role, you will own the roadmap for Safety and moderation ML, lead a team of applied scientists and engineers, and partner cross-functionally across Product, Engineering, Safety operations, Trust &amp; Community, and AI/ML Platform to innovate on real-time detection, automation, and user protection systems.</p>
<p>You will leverage modern ML, including fine-tuned LLMs, to ensure Reddit remains a safe, welcoming, and positive environment for our global user base.</p>
<p>Responsibilities:</p>
<ul>
<li>Set the vision and strategy for applying ML to Trust &amp; Safety, ensuring scalable, proactive protection against evolving abuse patterns.</li>
<li>Lead and grow a high-performing Safety ML organization, including applied research, model development, productionization, and continuous improvement.</li>
<li>Develop and deploy cutting-edge Safety ML systems (including fine-tuned LLMs and transformer models) that outperform state-of-the-art solutions in quality, latency, and efficiency.</li>
<li>Partner with Trust &amp; Safety, Product, Moderation, and AI/ML Platform teams to identify safety risks, emerging harm vectors, and ML opportunities that improve detection, enforcement, and user experience.</li>
<li>Drive successful experimentation, evaluation, and model lifecycle management, ensuring high precision, recall, explainability, and policy alignment.</li>
<li>Champion ethical and responsible AI practices in all Safety ML solutions.</li>
<li>Track performance through metrics, research-based iteration, and alignment with Reddit’s safety policies and regulatory standards.</li>
<li>Represent Safety ML leadership internally and externally, including conferences, publications, industry groups, and cross-company collaboration initiatives.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>10+ years of experience in Machine Learning, AI, or applied research, with a strong background in Trust &amp; Safety, abuse prevention, detection, or content integrity.</li>
<li>5+ years of experience leading multi-disciplinary ML teams (applied science, engineering, analytics) in a high-growth or high-impact environment.</li>
<li>Proven track record of shipping ML systems at scale in production, ideally including transformer-based models and LLM fine-tuning.</li>
<li>Depth in NLP, content understanding, detection systems, and supervised and weak-supervision techniques.</li>
<li>Strong cross-functional leadership skills, with ability to influence executives and foster alignment across Safety, Product, and Engineering.</li>
<li>Thought leadership in responsible AI, safety ML research, or safety measurement frameworks.</li>
</ul>
<p>Bonus points if you have:</p>
<ul>
<li>Experience building or operating real-time abuse detection and automated moderation systems in a complex user-generated content ecosystem.</li>
<li>Prior work in consumer-facing tech, social platforms, or large-scale community-driven products.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,800-$365,100 USD</Salaryrange>
      <Skills>Machine Learning, AI, Applied Research, Trust &amp; Safety, Abuse Prevention, Detection, Content Integrity, NLP, Content Understanding, Detection Systems, Supervised and Weak-Supervision Techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 100,000 active communities and 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>265800</Compensationmin>
      <Compensationmax>365100</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7430544</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>68c29e94-faa</externalid>
      <Title>Technical Cyber Threat Investigator</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are looking for a Technical Cyber Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for malicious cyber operations.</p>
<p>You will work at the intersection of AI safety and cybersecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging cyber threats in the rapidly evolving landscape of AI-enabled risks. Your work will directly protect the broader ecosystem from sophisticated threat actors who seek to leverage AI technology for harm.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for cyber operations, including influence operations, malware development, social engineering, and other adversarial activities</li>
<li>Develop abuse signals and tracking strategies to proactively detect sophisticated threat actors across our platform</li>
<li>Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting LLM systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, using open-source research, dark web monitoring, and internal data</li>
<li>Utilize investigation findings to implement systematic improvements to our safety approach and mitigate harm at scale</li>
<li>Study trends internally and in the broader ecosystem to anticipate how AI systems could be misused, generating and publishing reports</li>
<li>Build and maintain relationships with external threat intelligence partners, information sharing communities, and government stakeholders</li>
<li>Work cross-functionally to build out our threat intelligence program, establishing processes, tools, and best practices</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with large language models and understanding of how AI technology could be misused for cyber threats</li>
<li>Have subject matter expertise in abusive user behavior detection, such as influence operations, coordinated inauthentic behavior, or cyber threat intelligence</li>
<li>Have experience tracking threat actors across surface, deep, and dark web environments</li>
<li>Can derive insights from large datasets to make key decisions and recommendations</li>
<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks (MITRE ATT&amp;CK, etc.)</li>
<li>Have strong project management skills and ability to build processes from the ground up</li>
<li>Possess excellent communication skills to collaborate with cross-functional teams and present to leadership</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Experience working with government agencies or in regulated environments</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p><strong>Deadline to apply</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/career</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$290,000 USD</Salaryrange>
      <Skills>SQL, Python, large language models, cyber threat intelligence, abusive user behavior detection, threat actor profiling, threat intelligence frameworks (MITRE ATT&amp;CK), project management, communication, AI safety, machine learning security, technology abuse investigation, threat detection systems, abuse monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation working to create reliable, interpretable, and steerable AI systems. Its mission is to make AI safe and beneficial for users and society.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066995008</Applyto>
      <Location>San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c8d7ea06-b25</externalid>
      <Title>Technical CBRN-E Threat Investigator</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>
<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>
<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>
<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or related field</li>
<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>
<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Familiarity with synthetic biology, biotechnology, or dual-use research</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230,000 - $290,000 USD</Salaryrange>
      <Skills>SQL, Python, CBRN-E threat analysis, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, threat actor profiling, threat intelligence frameworks, large language models, AI safety, machine learning security, technology abuse investigation, threat detection systems, abuse monitoring, stakeholder management, Top Secret security clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066997008</Applyto>
      <Location>San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>1ee98770-f81</externalid>
      <Title>Technical Influence Operations Threat Investigator</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behaviour, and other forms of information manipulation.</p>
<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>
<p><em>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</em></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behaviour, astroturfing, and narrative manipulation campaigns</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyse large datasets, trace user behaviour patterns, and uncover coordinated networks of threat actors conducting influence operations</li>
<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioural clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>
<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>
<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>
<li>Monitor and analyse state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>
<li>Forecast how advances in AI technology—including improved content generation, voice synthesis, and multimodal capabilities—will reshape the influence operations landscape and inform safety-by-design strategies</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behaviour, disinformation, or information warfare</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>
<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>
<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>
<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>
<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>
<li>Familiarity with social network analysis techniques and tools for mapping coordinated behaviour</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration attorney to assist with the process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000 - $290,000 USD</Salaryrange>
      <Skills>SQL, Python, influence operations, disinformation, coordinated inauthentic behaviour, astroturfing, narrative manipulation, large language models, OSINT methodologies, social network analysis, Mandarin Chinese, Russian, Farsi, Arabic, intelligence analysis, information operations, counter-disinformation, threat detection systems, abuse monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation that aims to create reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5140239008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>716d3247-e3f</externalid>
      <Title>ML/Research Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the role</strong></p>
<p>We are looking for ML Engineers and Research Engineers to help detect and mitigate misuse of our AI systems. As a member of the Safeguards ML team, you will build systems that identify harmful use—from individual policy violations to sophisticated, coordinated attacks—and develop defenses that keep our products safe as capabilities advance. You will also work on systems that protect user wellbeing and ensure our models behave appropriately across a wide range of contexts. This work feeds directly into Anthropic&#39;s Responsible Scaling Policy commitments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop classifiers to detect misuse and anomalous behavior at scale. This includes developing synthetic data pipelines for training classifiers and methods to automatically source representative evaluations to iterate on</li>
<li>Build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts</li>
<li>Evaluate and improve the safety of agentic products: develop threat models and test environments for agentic risks, and build and deploy mitigations for prompt injection attacks</li>
<li>Conduct research on automated red-teaming, adversarial robustness, and other research that helps test for or find misuse</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have 4+ years of experience in ML engineering, research engineering, or applied research, in academia or industry</li>
<li>Have proficiency in Python and experience building ML systems</li>
<li>Are comfortable working across the research-to-deployment pipeline, from exploratory experiments to production systems</li>
<li>Are worried about misuse risks of AI systems, and want to work to mitigate them</li>
<li>Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>
</ul>
<p><strong>Strong candidates may also have experience with</strong></p>
<ul>
<li>Language modeling and transformers</li>
<li>Building classifiers, anomaly detection systems, or behavioral ML</li>
<li>Adversarial machine learning or red-teaming</li>
<li>Interpretability or probes</li>
<li>Reinforcement learning</li>
<li>High-performance, large-scale ML systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship</strong></p>
<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $500,000 USD</Salaryrange>
      <Skills>Python, Machine Learning, Research Engineering, Adversarial Machine Learning, Red-teaming, Interpretability, Probes, Reinforcement Learning, High-performance, large-scale ML systems, Language modeling and transformers, Building classifiers, anomaly detection systems, or behavioral ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation headquartered in San Francisco, with a mission to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>500000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4949336008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>bca7b9c2-2e3</externalid>
      <Title>Senior Security Software Engineer, eBPF &amp; Security Sensors</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call rotations</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>
<li>Have a track record of building and maintaining internal developer tools or security platforms</li>
<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
<li>Have experience with test-driven software development or CI/CD (direct experience with detection-as-code workflows is a plus)</li>
<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>
<li>Have experience with query optimization for large datasets</li>
<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>
<li>Can write maintainable and secure code in Python</li>
<li>Have experience working with security teams and translating requirements into technical solutions</li>
<li>Can lead technical projects with minimal guidance</li>
<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Have strong communication skills and the ability to translate technical concepts effectively across all organizational levels</li>
<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Have strong systems thinking and the ability to identify and mitigate risks in complex environments</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Experience building security tooling from the ground up</li>
<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Background in detection engineering or security operations</li>
<li>Experience with SOAR platforms or automation development</li>
<li>Experience with data lake or database architecture</li>
<li>Experience with API design and internal platform creation</li>
<li>Track record of applying ML/AI to security problems</li>
<li>Experience scaling security operations in a high-growth environment</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, Terraform, CloudFormation, query optimization, scalable services, cloud infrastructure, serverless architectures, security tooling, SIEM, log aggregation, EDR, SOAR platforms, automation development, data lake architecture, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108521008</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>