<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>b95267e5-333</externalid>
      <Title>Security Researcher, Codex Security</Title>
      <Description><![CDATA[<p>Job Title: Security Researcher, Codex Security</p>
<p>Compensation:</p>
<p>$325K – $405K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>About the Team:</p>
<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.</p>
<p>Codex Security is OpenAI’s first security agent, built to scan GitHub Cloud repositories, validate real vulnerabilities, and integrate with Codex to help generate fixes.</p>
<p>About the Role:</p>
<p>Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems, spanning data pipelines, training/inference runtimes, and system and supply chain components. You’ll drive offensive research, produce technical deliverables, enhance the Codex Security product line, and serve as OpenAI’s primary technical counterpart for select external partners (including potential U.S. government stakeholders).</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct deep security research on real-world software systems to discover complex vulnerabilities across large codebases and distributed architectures.</li>
<li>Investigate and validate vulnerabilities discovered by AI-driven security agents, including building proofs-of-concept and exploit demonstrations.</li>
<li>Partner with engineering teams to improve automated vulnerability discovery, validation, and remediation workflows as part of product development.</li>
<li>Build high-quality security datasets and evals that help advance models’ cybersecurity capabilities.</li>
<li>Train and improve AI models used for vulnerability discovery, validation, and automated remediation by developing datasets, evaluations, and feedback loops grounded in real-world security research.</li>
<li>Publish technical write-ups, research insights, and vulnerability analyses that advance the state of application security.</li>
</ul>
<p>You may thrive if you:</p>
<ul>
<li>Have strong experience in vulnerability research, exploit development, or offensive security.</li>
<li>Have deep experience with cutting-edge offensive-security techniques.</li>
<li>Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end-to-end.</li>
<li>Operate independently, align diverse teams, and deliver on tight timelines.</li>
<li>Communicate clearly and concisely with experts and decision-makers.</li>
<li>Care deeply about improving the security of widely used software and open-source infrastructure.</li>
<li>Are a strong developer who can work in a small, energetic team.</li>
</ul>
<p>Goals &amp; impact:</p>
<ul>
<li>Build AI-driven systems that can discover high-impact vulnerabilities in widely deployed systems and open-source software before attackers do.</li>
<li>Improve the precision and effectiveness of AI-driven security agents by grounding them in real-world vulnerability research.</li>
</ul>
<p>Key technical challenges:</p>
<ul>
<li>System-level vulnerability discovery: identifying complex vulnerabilities that span multiple services, trust boundaries, or components.</li>
<li>High-confidence validation: distinguishing real exploitable vulnerabilities from speculative or theoretical issues.</li>
<li>Scaling security research with AI agents: guiding automated systems to analyze millions of commits while maintaining research-level rigor.</li>
<li>Automated exploit and proof-of-concept generation: building reproducible demonstrations of vulnerabilities within sandboxed environments.</li>
<li>Building large systems that work within OpenAI’s enterprise architecture.</li>
</ul>
<p>About OpenAI</p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, colour, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.</p>
<p>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through [this form](https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.</p>
<p>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this [link](https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241).</p>
<p>[OpenAI Global Applicant Privacy Policy](https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf)</p>
<p>At OpenAI, we believe artificial intelligence has the potential to benefit society in countless ways, and we want to ensure that everyone has access to the resources they need to succeed. That’s why we’re committed to creating a workplace where everyone feels welcome, valued, and empowered to contribute their best work.</p>
<p>We strive to create a culture of inclusivity, diversity, and respect, where everyone feels comfortable sharing their ideas, perspectives, and experiences. We believe that our differences are what make us stronger, and we’re committed to fostering a workplace where everyone can thrive.</p>
<p>If you’re passionate about using AI to drive positive change and want to join a team that shares your values, we encourage you to apply for this role. Together, let’s build a brighter future for all.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$325K – $405K</Salaryrange>
      <Skills>vulnerability research, exploit development, offensive security, cutting-edge offensive-security techniques, AI/ML infrastructure, data, training, inference, schedulers, accelerators, threat-modeling, system-level vulnerability discovery, high-confidence validation, scaling security research with AI agents, automated exploit and proof-of-concept generation, large systems development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company; Codex Security is its security agent product, built to scan repositories, validate vulnerabilities, and help generate fixes.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>325000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/468d7cab-66e9-4ae1-bac9-f2531148af31</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7cc85573-4a2</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p>We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.</p>
<p>This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities while preventing misuse by malicious actors.</p>
<p>In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.</p>
<p>You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.</p>
<p>You will also analyze safety system performance in traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.</p>
<p>You will develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces. You will partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.</p>
<p>You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.</p>
<p>You will monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these.</p>
<p>You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.</p>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Cybersecurity, Vulnerability research, Exploit development, Network security, Malware analysis, Penetration testing, Detection, Monitoring, Incident response, Scientific computing, Data analysis, Programming (Python), Responsible disclosure practices, Vulnerability coordination, Cybersecurity frameworks (MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, Large language models, ML-based security systems, Adversarial ML research, Defense, intelligence, or security organizations, Published security research, Disclosed vulnerabilities, Bug bounty programs, Trust &amp; Safety operations, Content moderation at scale, Certifications (OSCP, OSCE, GXPN)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62900fcd-562</externalid>
      <Title>Security Engineer - Offensive Security</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>
<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</li>
<li>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</li>
<li>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</li>
<li>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</li>
<li>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</li>
<li>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</li>
<li>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</li>
<li>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</li>
<li>Contribute to internal security tooling repositories and champion engineering best practices within the team.</li>
<li>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</li>
<li>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</li>
<li>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</li>
<li>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</li>
<li>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&amp;CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It has a large user base, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7820898</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0287c3-e30</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year when models reach expert-level, even superhuman, performance in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>
<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy; what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs etc.) to characterize risks, defensive potential, and compare to human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: a field relevant to the role, as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: we do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c76d0c6d-ec7</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>
<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>
<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>
<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>
<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>
<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>
<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>
<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>
<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>
<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>
<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>
<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>
<li>Monitor emerging technologies and evolving threat landscapes for new risks, and develop strategies to address them</li>
<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Not specified</Salaryrange>
      <Skills>cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, threat modelling, policy development, scientific computing, data analysis, Python, responsible disclosure, vulnerability coordination, MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE, AI/ML systems, large language models, ML-based security systems, adversarial ML research, Trust &amp; Safety operations, content moderation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>San Francisco, CA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>45350b41-7eb</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring safety with self-improving, highly autonomous AI systems, especially ones related to cyberphysical capabilities. See our previous related work on exploits, our partnership with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every qualification.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, offensive security research, vulnerability research, exploit development, AI safety research, threat modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. The company has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>