<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>7cc85573-4a2</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p>We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.</p>
<p>This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities against the risk of misuse by malicious actors.</p>
<p>In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.</p>
<p>You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.</p>
<p>You will also analyze safety system performance on production traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.</p>
<p>You will rigorously stress-test safeguards against evolving cyber threats and product surfaces. You will partner with Research, Product, Policy, Security, and the Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.</p>
<p>You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.</p>
<p>You will monitor emerging technologies and evolving threat landscapes for new risks and mitigation opportunities, and address them strategically.</p>
<p>You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.</p>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Cybersecurity, Vulnerability research, Exploit development, Network security, Malware analysis, Penetration testing, Detection, Monitoring, Incident response, Scientific computing, Data analysis, Programming (Python), Responsible disclosure practices, Vulnerability coordination, Cybersecurity frameworks (MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, Large language models, ML-based security systems, Adversarial ML research, Defense, intelligence, or security organizations, Published security research, Disclosed vulnerabilities, Bug bounty programs, Trust &amp; Safety operations, Content moderation at scale, Certifications (OSCP, OSCE, GXPN)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62900fcd-562</externalid>
      <Title>Security Engineer - Offensive Security</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>
<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>
<p>Responsibilities:</p>
<p>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</p>
<p>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</p>
<p>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</p>
<p>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</p>
<p>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</p>
<p>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</p>
<p>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</p>
<p>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</p>
<p>Contribute to internal security tooling repositories and champion engineering best practices within the team.</p>
<p>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</p>
<p>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</p>
<p>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</p>
<p>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</p>
<p>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&amp;CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It has a large user base, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7820898</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0287c3-e30</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year when models reach expert-level, even superhuman, performance in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>
<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy: what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare their performance to human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>28f97bd7-3d7</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
      <Description><![CDATA[<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for LLMs to enable adversaries to cause harm by automating the attacks that today are carried out by human cybercrime groups, but in the future may be easily carried out by humans misusing LLMs.</p>
<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>We are hiring security specialists who are experienced at exploitation and remediation, and are interested in understanding how LLMs could cause harm in the future, so that we can better prepare for this future and mitigate these risks before they arise.</p>
<p>Responsibilities:</p>
<ul>
<li>Triage discovered vulnerabilities, and coordinate with and assist the external and open-source community in remediation</li>
<li>Write scaffolds designed to automate typical traditional attack techniques to help clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>3+ years experience with pentesting, vulnerability research, or other offensive security experience</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef01837a-5e3</externalid>
      <Title>Anthropic Fellows Program — AI Security</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Anthropic Fellows Program is a 4-month, full-time research opportunity for individuals to work on empirical AI research and engineering projects. As an AI Security Fellow, you will be part of a team that focuses on reducing catastrophic risks from advanced AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Conduct empirical AI research and engineering projects aligned with Anthropic&#39;s research priorities</li>
<li>Collaborate with mentors and peers to achieve project goals</li>
<li>Present research findings and results to the team and wider community</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Fluency in Python programming</li>
<li>Strong technical background in computer science, mathematics, or physics</li>
<li>Ability to implement ideas quickly and communicate clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with pentesting, vulnerability research, or other offensive security work</li>
<li>Experience with empirical ML research projects</li>
<li>Experience with deep learning frameworks and experiment management</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>To participate in the Fellows program, you must have work authorization in the UK and be located in the UK during the program</li>
<li>Workspace locations: London and Berkeley</li>
<li>Visa sponsorship: Not currently available</li>
</ul>
<p><strong>Application Process</strong></p>
<p>Applications and interviews are managed by Constellation, our official recruiting partner for this program. Clicking &#39;Apply here&#39; will redirect you to Constellation&#39;s application portal.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>3,850 USD / 2,310 GBP / 4,300 CAD per week</Salaryrange>
      <Skills>Python, Computer Science, Mathematics, Physics, Pentesting, Vulnerability Research, Offensive Security Work, Empirical ML Research Projects, Deep Learning Frameworks, Experiment Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030244008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5fba9d7d-674</externalid>
      <Title>AI Security Fellow</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>AI Security at Anthropic</strong></p>
<p>We believe we are at an inflection point for AI&#39;s impact on cybersecurity. Models are now useful for cybersecurity tasks in practice: for example, Claude can now outperform human teams in some cybersecurity competitions and help us discover vulnerabilities in our own code.</p>
<p>We are looking for researchers and engineers to help us accelerate defensive use of AI to secure code and infrastructure.</p>
<p><strong>Anthropic Fellows Program Overview</strong></p>
<p>The Anthropic Fellows Program is designed to accelerate AI security and safety research, and foster research talent. We provide funding and mentorship to promising technical talent - regardless of previous experience - to research the frontier of AI security and safety for four months.</p>
<p>Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).</p>
<p>We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.</p>
<p><strong>What to Expect</strong></p>
<ul>
<li>Direct mentorship from Anthropic researchers</li>
<li>Access to a shared workspace (in either Berkeley, California or London, UK)</li>
<li>Connection to the broader AI safety research community</li>
<li>Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD &amp; access to benefits (benefits vary by country)</li>
<li>Funding for compute (~$15k/month) and other research expenses</li>
</ul>
<p><strong>Mentors, Research Areas, &amp; Past Projects</strong></p>
<p>Fellows will undergo a project selection &amp; mentor matching process. Potential mentors include:</p>
<ul>
<li>Nicholas Carlini</li>
<li>Keri Warr</li>
<li>Evyatar Ben Asher</li>
<li>Keane Lucas</li>
<li>Newton Cheng</li>
</ul>
<p>On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows projects, including:</p>
<ul>
<li>AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng</li>
<li>Strengthening Red Teams: A Modular Scaffold for Control Evaluations: Chloe Loughridge et al., mentored by Jon Kutasov and Joe Benton</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Are motivated by reducing catastrophic risks from advanced AI systems</li>
<li>Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic</li>
</ul>
<p><strong>Please note:</strong></p>
<p>We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit here at Anthropic. In previous cohorts, over 40% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on safety at other organizations.</p>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Experience with pentesting, vulnerability research, or other offensive security</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Experience with empirical ML research projects</li>
<li>Experience with deep learning frameworks and experiment management</li>
</ul>
<p><strong>Candidates must be:</strong></p>
<ul>
<li>Fluent in Python programming</li>
<li>Available to work full-time on the Fellows program for 4 months</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Interview process</strong></p>
<p>The interview process will include an initial application &amp; references check, technical assessments &amp; interviews, and a research discussion.</p>
<p><strong>Compensation</strong></p>
<p>The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week, for 4 months (with possible extension).</p>
<p><strong>Logistics</strong></p>
<p>Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.</p>
<p>Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.</p>
<p>Visa Sponsorship: We are not currently able to sponsor visas for fellows.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>3,850 USD / 2,310 GBP / 4,300 CAD per week</Salaryrange>
      <Skills>Python programming, AI security, Cybersecurity, Empirical ML research, Machine learning, Deep learning frameworks, Experiment management, Open-source projects, Pentesting, Vulnerability research, Offensive security, CVEs, Bug bounty vulnerabilities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030244008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c76d0c6d-ec7</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>
<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>
<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>
<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>
<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>
<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>
<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>
<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>
<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>
<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>
<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>
<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>
<li>Monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these</li>
<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, Python programming, threat modelling, policy frameworks, responsible disclosure, vulnerability coordination, MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems, AI/ML systems, large language models, ML-based security systems, adversarial ML research, Trust &amp; Safety operations, content moderation at scale, dual-use security research, AI safety ethics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>San Francisco, CA, Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>45350b41-7eb</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring safety with self-improving, highly autonomous AI systems, especially ones related to cyberphysical capabilities. See our previous related work on exploits, partnering with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, offensive security research, vulnerability research, exploit development, AI safety research, threat modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. The company has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b0cdccea-4ed</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One such risk is the potential for LLMs to enable adversaries to cause harm by automating attacks that today are carried out by human cybercrime groups, but in the future could be carried out at scale by malicious actors misusing LLMs. We are hiring security specialists who are experienced in exploitation and remediation and who are interested in understanding how LLMs could cause harm in the future, so that we can prepare for and mitigate these risks before they arise.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Triage any vulnerabilities discovered, coordinate and assist the external and open-source community in remediation</li>
<li>Write scaffolds designed to automate typical traditional attack techniques to help clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>3+ years experience with pentesting, vulnerability research, or other offensive security experience</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organisational dynamics</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, communication skills, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems. The company is headquartered in San Francisco, CA.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>