<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p>Job Title: Security Labs Engineer</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p>Current Project Areas</p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p>Responsibilities</p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p>Requirements</p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p>Location</p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve physical secure facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p>Deadline to Apply</p>
<p>None; applications are reviewed on a rolling basis.</p>
<p>Annual Compensation Range</p>
<p>$405,000 - $485,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Comfort with ambiguity and iteration, AI safety, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that specializes in developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7cc85573-4a2</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p>We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.</p>
<p>This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities while preventing misuse by malicious actors.</p>
<p>In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.</p>
<p>You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.</p>
<p>You will also analyze safety system performance in traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.</p>
<p>You will develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces. You will partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.</p>
<p>You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.</p>
<p>You will monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these.</p>
<p>You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.</p>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Cybersecurity, Vulnerability research, Exploit development, Network security, Malware analysis, Penetration testing, Detection, Monitoring, Incident response, Scientific computing, Data analysis, Programming (Python), Responsible disclosure practices, Vulnerability coordination, Cybersecurity frameworks (MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, Large language models, ML-based security systems, Adversarial ML research, Defense, intelligence, or security organizations, Published security research, Disclosed vulnerabilities, Bug bounty programs, Trust &amp; Safety operations, Content moderation at scale, Certifications (OSCP, OSCE, GXPN)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e48ec86-b97</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p><strong>Current Project Areas</strong></p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Ambiguity and iteration, Genuine curiosity, Passion for AI safety, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Secure enclaves, TPMs, Confidential computing primitives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0287c3-e30</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>
<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy: what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>faffcca4-e94</externalid>
      <Title>Research Engineer, Cybersecurity Reinforcement Learning</Title>
      <Description><![CDATA[<p>About the role</p>
<p>We&#39;re hiring for the Cybersecurity RL team within Horizons. As a Research Engineer, you&#39;ll help to safely advance the capabilities of our models in secure coding, vulnerability remediation, and other areas of defensive cybersecurity.</p>
<p>This role blends research and engineering, requiring you to both develop novel approaches and realize them in code. Your work will include designing and implementing RL environments, conducting experiments and evaluations, delivering your work into production training runs, and collaborating with other researchers, engineers, and cybersecurity specialists across and outside Anthropic.</p>
<p>The role requires domain expertise in cybersecurity paired with interest or experience in training safe AI models. For example, you might be a white hat hacker who&#39;s curious about how LLMs could augment or transform your work, a security engineer interested in how AI could help harden systems at scale, or a detection and response professional wondering how models could enhance defensive workflows.</p>
<p>Responsibilities</p>
<ul>
<li>Design and implement RL environments for secure coding and vulnerability remediation</li>
<li>Conduct experiments and evaluations to assess the effectiveness of our models</li>
<li>Deliver your work into production training runs to advance the capabilities of our models</li>
<li>Collaborate with other researchers, engineers, and cybersecurity specialists across and outside Anthropic</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience in cybersecurity research</li>
<li>Experience with machine learning</li>
<li>Strong software engineering skills</li>
<li>Ability to balance research exploration with engineering implementation</li>
<li>Passion for AI&#39;s potential and commitment to developing safe and beneficial systems</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Professional experience in security engineering, fuzzing, detection and response, or other applied defensive work</li>
<li>Experience participating in or building CTF competitions and cyber ranges</li>
<li>Academic research experience in cybersecurity</li>
<li>Familiarity with RL techniques and environments</li>
<li>Familiarity with LLM training methodologies</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>cybersecurity research, machine learning, software engineering, research exploration, engineering implementation, security engineering, fuzzing, detection and response, RL techniques, LLM training methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025624008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c76d0c6d-ec7</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>
<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>
<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>
<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>
<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>
<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>
<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>
<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>
<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>
<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>
<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>
<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>
<li>Monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these</li>
<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, Python, threat modelling, policy frameworks, responsible disclosure, vulnerability coordination, MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE, AI/ML systems, large language models, ML-based security systems, adversarial ML research, bug bounty programs, Trust &amp; Safety operations, content moderation at scale, dual-use research ethics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>45350b41-7eb</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring the safety of self-improving, highly autonomous AI systems, especially those with cyber-physical capabilities. See our previous related work on exploits, partnering with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Can design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have a proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>A track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>A track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, offensive security research, vulnerability research, exploit development, AI safety research, threat modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. The company has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b0188062-45f</externalid>
      <Title>Research Engineer, Cybersecurity Reinforcement Learning</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We&#39;re hiring for the Cybersecurity RL team within Horizons. As a Research Engineer, you&#39;ll help to safely advance the capabilities of our models in secure coding, vulnerability remediation, and other areas of defensive cybersecurity.</p>
<p>This role blends research and engineering, requiring you to both develop novel approaches and realise them in code. Your work will include designing and implementing RL environments, conducting experiments and evaluations, delivering your work into production training runs, and collaborating with other researchers, engineers, and cybersecurity specialists across and outside Anthropic.</p>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have experience in cybersecurity research.</li>
<li>Have experience with machine learning.</li>
<li>Have strong software engineering skills.</li>
<li>Can balance research exploration with engineering implementation.</li>
<li>Are passionate about AI&#39;s potential and committed to developing safe and beneficial systems.</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Professional experience in security engineering, fuzzing, detection and response, or other applied defensive work.</li>
<li>Experience participating in or building CTF competitions and cyber ranges.</li>
<li>Academic research experience in cybersecurity.</li>
<li>Familiarity with RL techniques and environments.</li>
<li>Familiarity with LLM training methodologies.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lot more.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>cybersecurity research, machine learning, software engineering, RL techniques and environments, LLM training methodologies, security engineering, fuzzing, detection and response, CTF competitions and cyber ranges, academic research in cybersecurity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation headquartered in San Francisco, focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025624008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>f940647d-c39</externalid>
      <Title>SOC Engineer</Title>
      <Description><![CDATA[<p>We are looking for a SOC Engineer to join our Security Operations team and help defend a fast-moving, cloud-native AI vibe-coding platform. In this role, you will stay on top of emerging threats—from 0-days and active exploitation campaigns to bug bounty findings and customer-reported issues—and rapidly determine their relevance and potential impact to Replit.</p>
<p>This is a hands-on, investigative role requiring strong technical depth, understanding of modern software engineering and CI/CD systems, familiarity with cloud-native infrastructure (especially GCP), and the ability to work across multiple teams in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Threat Awareness &amp; Rapid Assessment</strong></p>
<ul>
<li>Continuously monitor emerging threats, including bad actor activity, 0-day vulnerabilities, public exploitation campaigns, bug bounty reports, and customer-reported security issues.</li>
<li>Quickly assess the applicability of these threats to Replit’s cloud infrastructure, SaaS services, internal tooling, and platform components.</li>
</ul>
<p><strong>Investigation &amp; Impact Analysis</strong></p>
<ul>
<li>Conduct targeted investigations to determine whether Replit is already impacted by a newly discovered threat, vulnerability, or exploit.</li>
<li>Analyze logs, telemetry, and system behaviors using SIEM, metrics, Cloud Logging, and related tools.</li>
<li>Identify gaps or weaknesses in existing detection or visibility and propose improvements.</li>
</ul>
<p><strong>Containment, Mitigation &amp; Cross-Team Collaboration</strong></p>
<ul>
<li>Research potential impact paths and develop mitigation strategies for confirmed or applicable threats.</li>
<li>Partner closely with Security, SRE, and Engineering teams to coordinate and implement containment, patches, configuration updates, or code-level fixes.</li>
<li>Document findings, mitigations, and follow-up actions clearly for internal teams.</li>
</ul>
<p><strong>Required Skills &amp; Experience</strong></p>
<ul>
<li>Strong understanding of software engineering fundamentals, including code structure, build systems, dependencies, and package ecosystems—enabling effective partnership with Engineering teams.</li>
<li>Understanding of CI/CD pipelines and DevOps workflows, enabling collaboration with Infrastructure and DevOps teams.</li>
<li>Solid knowledge of cloud architecture, especially Google Cloud Platform (GCP) services used in modern cloud-native deployments.</li>
<li>Familiarity with SaaS architectures, identity systems, and integration patterns for effective collaboration with Cloud Security teams.</li>
<li>Hands-on experience with SIEM, Cloud Logging, and log-based investigation workflows.</li>
<li>Ability to perform investigations using log data, behavioral indicators, and threat intelligence.</li>
<li>General understanding of vulnerability lifecycles, exploitability analysis, and common attack vectors.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with threat intelligence, security research, or vulnerability analysis.</li>
<li>Familiarity with Kubernetes, containers, serverless infrastructure, or modern distributed systems.</li>
<li>Ability to write scripts or small tools for investigation or automation (Python, Go, Bash).</li>
<li>Experience working with bug bounty programs or coordinated vulnerability disclosure workflows.</li>
<li>Experience in fast-paced, cloud-native, or AI/ML-driven environments.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Curiosity &amp; initiative: Strong desire to understand attacker behaviors, emerging threats, and how they apply to real-world systems.</li>
<li>Speed &amp; analytical rigor: Ability to quickly assess high-risk vulnerabilities with clear, evidence-based reasoning.</li>
<li>Collaboration: Comfort working across cross-functional teams spanning Security, SRE, Engineering, and Infrastructure.</li>
<li>Clear communication: Ability to explain findings, risks, and mitigation strategies to stakeholders at all levels.</li>
<li>Ownership mindset: Takes initiative to drive investigations, improvements, and remediations to completion.</li>
<li>Continuous learning: Passion for staying up to date on new vulnerabilities, exploit trends, and cloud-native security best practices.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<p>💰 Competitive Salary &amp; Equity</p>
<p>💹 401(k) Program with a 4% match</p>
<p>⚕️ Health, Dental, Vision and Life Insurance</p>
<p>🩼 Short Term and Long Term Disability</p>
<p>🚼 Paid Parental, Medical, Caregiver Leave</p>
<p>🚗 Commuter Benefits</p>
<p>📱 Monthly Wellness Stipend</p>
<p>🧑‍💻 Autonomous Work Environment</p>
<p>🖥 In Office Set-Up Reimbursement</p>
<p>🏝 Flexible Time Off (FTO) + Holidays</p>
<p>🚀 Quarterly Team Gatherings</p>
<p>☕ In Office Amenities</p>
<p><strong>Want to learn more about what we are up to?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p>To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product, and we encourage people from all kinds of backgrounds to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K – $250K</Salaryrange>
      <Skills>software engineering fundamentals, CI/CD systems, cloud-native infrastructure, GCP services, SaaS architectures, identity systems, integration patterns, SIEM, Cloud Logging, log-based investigation workflows, vulnerability lifecycles, exploitability analysis, common attack vectors, threat intelligence, security research, vulnerability analysis, Kubernetes, containers, serverless infrastructure, modern distributed systems, Python, Go, Bash, bug bounty programs, coordinated vulnerability disclosure workflows, fast-paced, cloud-native, AI/ML-driven environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading provider of cloud-native AI vibe-coding platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/54051fe0-045f-46b1-a2b8-a730575b05eb</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>d83abc11-64e</externalid>
      <Title>Researcher, Misalignment Research</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>New York City; San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$380K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Safety Systems sits at the forefront of OpenAI’s mission to build and deploy safe AGI, ensuring our most capable models can be released responsibly and for the benefit of society. Within Safety Systems, we are building a misalignment research team to focus on the most pressing problems for the future of AGI. Our mandate is to identify, quantify, and understand future AGI misalignment risks far in advance of when they can pose harm.</p>
<p>The work of this research taskforce spans four pillars:</p>
<ol>
<li><strong>Worst‑Case Demonstrations</strong> – Craft compelling, reality‑anchored demos that reveal how AI systems can go wrong. We focus especially on high‑importance cases where misaligned AGI could pursue goals at odds with human well‑being.</li>
<li><strong>Adversarial &amp; Frontier Safety Evaluations</strong> – Transform those demos into rigorous, repeatable evaluations that measure dangerous capabilities and residual risks. Topics of interest include deceptive behavior, scheming, reward hacking, deception in reasoning, and power-seeking, along with other related areas.</li>
<li><strong>System‑Level Stress Testing</strong> – Build automated infrastructure to probe entire product stacks, assessing end‑to‑end robustness under extreme conditions. We treat misalignment as an evolving adversary, escalating tests until we find breaking points even as systems continue to improve.</li>
<li><strong>Alignment Stress‑Testing Research</strong> – Investigate why mitigations break, publishing insights that shape strategy and next‑generation safeguards. We collaborate with other labs when useful and actively share misalignment findings to accelerate collective progress.</li>
</ol>
<p><strong>About the Role</strong></p>
<p>We are seeking a Senior Researcher who is passionate about red‑teaming and AI safety. In this role you will design and execute cutting‑edge attacks, build adversarial evaluations, and advance our understanding of how safety measures can fail—and how to fix them. Your insights will directly influence OpenAI’s product launches and long‑term safety roadmap.</p>
<p><strong>In this role, you will</strong></p>
<ul>
<li>Design and implement worst‑case demonstrations that make AGI alignment risks concrete for stakeholders, focused on the high‑stakes use cases described above.</li>
<li>Develop adversarial and system‑level evaluations grounded in those demonstrations, driving adoption across OpenAI.</li>
<li>Create tools and infrastructure to scale automated red‑teaming and stress testing.</li>
<li>Conduct research on failure modes of alignment techniques and propose improvements.</li>
<li>Publish influential internal or external papers that shift safety strategy or industry practice. We aim to concretely reduce existential AI risk.</li>
<li>Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.</li>
<li>Mentor engineers and researchers, fostering a culture of rigorous, impact‑oriented safety work.</li>
</ul>
<p><strong>You might thrive in this role if you</strong></p>
<ul>
<li>Are already thinking about these problems night and day, share our mission to build safe, universally beneficial AGI, and align with the OpenAI Charter.</li>
<li>Have 4+ years of experience in AI red‑teaming, security research, adversarial ML, or related safety fields.</li>
<li>Possess a strong research track record—publications, open‑source projects, or high‑impact internal work—demonstrating creativity in uncovering and exploiting system weaknesses.</li>
<li>Are fluent in modern ML / AI techniques and comfortable hacking on large‑scale codebases and evaluation infrastructure.</li>
<li>Communicate clearly with both technical and non‑technical audiences, translating complex findings into actionable recommendations.</li>
<li>Enjoy collaboration and can drive cross‑functional projects that span research, engineering, and policy.</li>
<li>Hold a Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related field.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K – $445K</Salaryrange>
      <Skills>AI red-teaming, security research, adversarial ML, safety fields, modern ML / AI techniques, large-scale codebases, evaluation infrastructure, publications, open-source projects, high-impact internal work, creativity in uncovering and exploiting system weaknesses</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and deploying artificial general intelligence (AGI) in a way that benefits society. With a team of researchers and engineers, OpenAI aims to create AGI that is safe and beneficial for humanity.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/7055f010-99f4-4c76-8361-ba5b5f9af1d0</Applyto>
      <Location>New York City; San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>