<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p>Job Title: Security Labs Engineer</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p>Current Project Areas</p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p>Responsibilities</p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p>Requirements</p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p>Location</p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve physical secure facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p>Deadline to Apply</p>
<p>None; applications are received on a rolling basis.</p>
<p>Annual Compensation Range</p>
<p>$405,000 - $485,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Comfort with ambiguity and iteration, Genuine curiosity about what it would actually take to defend against a nation-state-level adversary, Passion for AI safety, Real understanding of the role security plays in making frontier AI development go well, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Background building or operating security systems in environments that demand rapid iteration rather than rigid change control, Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that specializes in developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62900fcd-562</externalid>
      <Title>Security Engineer - Offensive Security</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>
<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure</li>
<li>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios</li>
<li>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams</li>
<li>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity</li>
<li>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks</li>
<li>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required</li>
<li>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage</li>
<li>Build internal platforms and workflows that enable scalable, repeatable offensive operations</li>
<li>Contribute to internal security tooling repositories and champion engineering best practices within the team</li>
<li>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices</li>
<li>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders</li>
<li>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives</li>
<li>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing</li>
<li>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&amp;CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It has a large user base, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7820898</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e992e68-7cd</externalid>
      <Title>Staff Engineer, Offensive Security</Title>
      <Description><![CDATA[<p>As a Staff Engineer, Offensive Security at Twilio, you will act as a Technical Lead and design complex attack chains that demonstrate systemic risk. You will spend as much time writing custom code and researching new bypasses as you do executing tests.</p>
<p>In this role, you will:</p>
<ul>
<li>Perform manual and automated testing of web applications, APIs, and mobile apps (iOS/Android)</li>
<li>Conduct network- and cloud-level assessments with various tooling</li>
<li>Triage and validate reports from automated scanners or bug bounty hunters to eliminate false positives and escalate true positives</li>
<li>Perform initial prompt injection and jailbreak tests on AI prototypes, services, and applications using established checklists (OWASP Top 10 for LLMs)</li>
<li>Draft high-quality reports that detail the &quot;path to compromise&quot; with clear, reproducible steps for developers</li>
<li>Manage and update the team&#39;s testing infrastructure (e.g., Burp Suite and basic C2 listeners)</li>
<li>Provide direct technical guidance to engineering teams on how to patch vulnerabilities like XSS, SQLi, and IDOR</li>
<li>Design and lead multi-week Red Team operations that mimic specific threat actors (APTs) to test SIRT detection capabilities</li>
<li>Build custom payloads, droppers, and obfuscated scripts to bypass EDR/AV and maintain stealth</li>
<li>Build automated testing frameworks for AI systems (e.g., using PyRIT, Promptfoo, or Garak) to test models for sensitive data leakage</li>
<li>Execute sophisticated attacks against AWS/Azure/K8s, focusing on IAM misconfigurations and container escapes</li>
<li>Collaborate with SIRT and Detection Engineering to tune SIEM alerts based on the techniques used during an engagement</li>
<li>Oversee the organization&#39;s bug bounty program, identifying trends in submissions to suggest broad architectural security changes</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Offensive security, Penetration testing, Bug bounty, AppSec, Vulnerability exploitation, MITRE ATT&amp;CK matrix, OWASP Top 10 for web applications, OWASP Top 10 for LLMs, Post exploitation, Adversarial ML, Burp Suite professional, Nmap, Metasploit, Wireshark, LangChain, TensorFlow, C2 frameworks, Python, Bash, C++, Telecom expertise, Excellent written and verbal communication skills, Ability to influence and build effective working relationships with all levels of the organization, Proficiency in multiple languages applicable to the region</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7622285</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bdf949b3-c66</externalid>
<Title>Databricks Enterprise Lead Security Architect - Principal IT Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Lead Security Architect to join our team within Databricks IT. As a Lead Security Architect, you will be responsible for designing and implementing a secure and scalable architecture to protect our corporate assets. You will focus on key areas of IT security, including Identity and Access Management, Zero Trust architecture, and endpoint security, while also working to secure critical business applications and sensitive data.</p>
<p>Your expertise will be crucial in building proactive security strategies that align with our business goals and protect the company from an ever-evolving threat landscape. This position demands deep expertise in security principles and a comprehensive understanding of the entire infrastructure stack and IAM systems to design robust, future-ready security solutions.</p>
<p>You will be instrumental in safeguarding our systems&#39; resilience and integrity against ever-evolving cyber threats. You will play a critical role in shaping our security strategy for modern platforms across AWS, Azure, GCP, network infrastructure, storage, and SaaS solutions, helping establish a strong least-privilege (PoLP) model, providing specialized IAM expertise, and securely supporting SaaS with sensitive information (NHI).</p>
<p>You will also be a key contributor in building our internal strategy for secure AI development. Additionally, you will support the secure integration of SaaS platforms such as Google Workspace, collaboration tools, and GTM systems, maintaining alignment with enterprise security standards.</p>
<p>Close collaboration with cross-functional teams is essential to embed security throughout the technology stack.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and implement secure, scalable reference architectures for the Databricks IT across Cloud Infra (Compute, DBs, Network, Storage), SaaS, Custom Built Applications, Data &amp; AI systems.</li>
<li>Establish and enforce security controls for core security areas:
<ul>
<li>Databricks Workspace Management: workspace isolation, Unity Catalog for data governance</li>
<li>Secure Networking: VPC configs, PrivateLink, IP allow lists</li>
<li>Identity and Access Management (IAM): SSO, SCIM user provisioning, RBAC via Un, strong MFA best practices for enterprise identities and customers</li>
<li>Data Encryption: at rest and in transit, customer-managed keys for critical assets</li>
<li>Data Exfiltration Prevention: admin console settings, VPC endpoint controls</li>
<li>Cluster Security: user isolation, compliance with enhanced security monitoring/Compliance Security Profiles (HIPAA, PCI-DSS, FedRAMP)</li>
<li>Offensive Security: test and challenge the effectiveness of the organization’s security defenses by mimicking the tactics, techniques, and procedures used by actual attackers</li>
</ul>
</li>
<li>Deliver specialized security functions:
<ul>
<li>Non-human Identity Management: design and implement secure authentication and authorization for automated systems (service accounts, API keys, machine identities), focusing on automation and integration with existing identity management systems</li>
<li>IAM Best Practices: develop and document comprehensive Identity and Access Management policies, including user provisioning, de-provisioning, access reviews, privileged access management, and multi-factor authentication, ensuring security and compliance</li>
<li>Data Loss Prevention (DLP): implement DLP solutions to identify, monitor, and protect sensitive data across endpoints, networks, and cloud environments, preventing unauthorized access, use, or transmission</li>
<li>SaaS Proxy Design and Implementation: design and implement cloud-based proxies for SaaS applications (SASE solutions) to provide secure access, enforce security policies, monitor user activity, and protect against threats</li>
<li>Cloud Infrastructure Best Practices: establish and document best practices for VPC configurations, cloud networking, and infrastructure as code using Terraform, ensuring secure network segmentation, routing, firewalls, and VPNs for consistent, automated, and secure deployments</li>
<li>Least Privilege Access for Data Security: design and implement data security controls based on the principle of least privilege, ensuring users and systems have only the minimum necessary access through fine-grained controls, data classification, and regular access reviews</li>
</ul>
</li>
<li>Guide internal IT on Databricks’ security and compliance certifications (SOC 2, ISO 27001/27017/27018, HIPAA, PCI-DSS, FedRAMP), and support security reviews/audits.</li>
<li>Support incident response, vulnerability management, threat modeling, and red teaming using audit logs, cluster policies, and enhanced monitoring.</li>
<li>Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs to enhance security posture.</li>
<li>Advise executive leadership on security architecture, risks, and mitigation.</li>
<li>Mentor security engineers and developers on secure design and best practices.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Information Security, Engineering, or a related field</li>
<li>Master’s degree in Computer Science, with a focus on Information Security or a related discipline, is strongly preferred</li>
<li>Minimum 12 years in cybersecurity, with 5+ in security architecture or senior technical roles.</li>
<li>Experience in FedRAMP High systems/ GovCloud preferred.</li>
<li>Must have direct experience designing and securing enterprise platforms in complex multi-cloud environments, deep knowledge of enterprise architecture and security features (control plane/data plane separation, network infra, workspace hardening, network segmentation/ isolation), and hands-on experience automating security controls with Terraform and scripting.</li>
<li>Proven expertise securing data analytics pipelines, SaaS integrations, and workload isolation in enterprise ecosystems.</li>
<li>Experience with Enterprise Security Analysis Tools and monitoring/security policy optimization.</li>
<li>Deep experience in threat modeling, design, PoC, and implementing large-scale enterprise solutions.</li>
<li>Extensive hands-on experience in AWS cloud security, network security, with knowledge of Zero Trust, Data Protection, and Appsec.</li>
<li>Strong understanding of enterprise IAM systems (Okta, SailPoint, VDI, Entra ID) and Data Protection.</li>
<li>Expert experience with SIEM platforms, XDR, and cloud-native threat detection tools.</li>
<li>Expert in web application security, OWASP, API security, and secure design and testing.</li>
<li>Hands-on experience with security automation is required, with proficiency in AI-assisted development, Python, Cursor, Lambda, Terraform, or comparable scripting/IaC tools for operational efficiency.</li>
<li>Industry certifications like CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, or AWS Certified Advanced Networking – Specialty (or equivalent) are preferred.</li>
<li>Ability to influence stakeholders and drive alignment.</li>
<li>Strategic thinker with a passion for security innovation, continuous improvement, and building scalable defenses.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Security Architecture, Identity and Access Management, Zero Trust, Endpoint Security, Data Encryption, Data Exfiltration Prevention, Cluster Security, Offensive Security, Non-human Identity Management, IAM Best Practices, Data Loss Prevention, SaaS Proxy Design and Implementation, Cloud Infrastructure Best Practices, Least Privilege Access for Data Security, Guide internal IT on Databricks’ security and compliance certifications, Support incident response, vulnerability management, threat modeling, and red teaming, Stay current on industry trends and emerging threats in GenAI, AI Agentic flow, MCPs, Advise executive leadership on security architecture, risks, and mitigation, Mentor security engineers and developers on secure design and best practices, Terraform, Python, Cursor, Lambda, AWS cloud security, Network security, Data Protection, Appsec, SIEM platforms, XDR, cloud-native threat detection tools, Web application security, OWASP, API security, Secure design and testing, AI-assisted development, Security automation, Scripting/IaC tools, CISSP, CCSP, CEH, AWS Certified Security – Specialty, AWS Certified Solutions Architect – Professional, AWS Certified Advanced Networking – Specialty</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a technology company that provides a cloud-based platform for data analytics and artificial intelligence.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8207910002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e48ec86-b97</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p><strong>Current Project Areas</strong></p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Ambiguity and iteration, Genuine curiosity, Passion for AI safety, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Secure enclaves, TPMs, Confidential computing primitives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0287c3-e30</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are developing novel cyber capabilities. We think 2026 will be the year when models reach expert-level, even superhuman, capability in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Engineer / Scientist on the Frontier Red Team (FRT) focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>
<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy; what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>28f97bd7-3d7</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
<Description><![CDATA[<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for LLMs to enable adversaries to cause harm by automating attacks that today are carried out by human cybercrime groups, but in the future could be carried out easily by anyone misusing LLMs.</p>
<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>We are hiring security specialists who are experienced at exploitation and remediation, and are interested in understanding how LLMs could cause harm in the future, so that we can better prepare for this future and mitigate these risks before they arise.</p>
<p>Responsibilities:</p>
<ul>
<li>Triage any vulnerabilities discovered, coordinate and assist the external and open-source community in remediation</li>
<li>Write scaffolds that automate traditional attack techniques, helping clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>3+ years experience with pentesting, vulnerability research, or other offensive security experience</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A demonstrated willingness to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef01837a-5e3</externalid>
      <Title>Anthropic Fellows Program — AI Security</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Anthropic Fellows Program is a 4-month, full-time research opportunity for individuals to work on empirical AI research and engineering projects. As an AI Security Fellow, you will be part of a team that focuses on reducing catastrophic risks from advanced AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Conduct empirical AI research and engineering projects aligned with Anthropic&#39;s research priorities</li>
<li>Collaborate with mentors and peers to achieve project goals</li>
<li>Present research findings and results to the team and wider community</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Fluency in Python programming</li>
<li>Strong technical background in computer science, mathematics, or physics</li>
<li>Ability to implement ideas quickly and communicate clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with pentesting, vulnerability research, or other offensive security work</li>
<li>Experience with empirical ML research projects</li>
<li>Experience with deep learning frameworks and experiment management</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>To participate in the Fellows program, you must have work authorization in the UK and be located in the UK during the program</li>
<li>Workspace locations: London and Berkeley</li>
<li>Visa sponsorship: Not currently available</li>
</ul>
<p><strong>Application Process</strong></p>
<p>Applications and interviews are managed by Constellation, our official recruiting partner for this program. Clicking &#39;Apply here&#39; will redirect you to Constellation&#39;s application portal.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry|mid|senior|staff|executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$3,850 USD / £2,310 / $4,300 CAD per week</Salaryrange>
      <Skills>Python, Computer Science, Mathematics, Physics, Pentesting, Vulnerability Research, Offensive Security Work, Empirical ML Research Projects, Deep Learning Frameworks, Experiment Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030244008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8bf116df-95e</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>Job Title: Application Security Engineer</p>
<p>About the Role: The Application Security team at Anthropic is at the forefront of building security into every phase of the software development lifecycle. As an Application Security Engineer, you will partner closely with software engineers and researchers to ensure that security is a core consideration from initial design through implementation. You will lead threat modeling and secure design reviews to proactively identify and mitigate risks early, and help with continuous risk assessment. You will build tools and systems to support developers shipping code securely, adhering to secure coding best practices.</p>
<p>Responsibilities:</p>
<ul>
<li>Help secure AI products and internal tools that are introducing industry-novel security risks and pushing established security boundaries</li>
<li>Lead “shift left” security efforts to build security into the software development lifecycle</li>
<li>Conduct secure design reviews and threat modeling. Identify and prioritize risks, attack surfaces, and vulnerabilities</li>
<li>Develop tooling to scale security code reviews and respond to developer questions, including advising developers on remediating vulnerabilities and following secure coding practices</li>
<li>Manage Anthropic&#39;s vulnerability management program, including integrating data ingestion pipelines, coding logic to prioritize vulnerability fixes, supporting teams remediating vulnerabilities, and developing automated systems at scale</li>
<li>Oversee Anthropic&#39;s bug bounty program. Set scope, validate submissions, perform root cause analysis, coordinate remediation with engineering teams, and award bounties. Cultivate relationships with the ethical hacker community</li>
<li>Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development</li>
<li>Develop and document security policies, standards, and playbooks. Conduct security awareness training for engineers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of hands-on experience in application and infrastructure security, including securing cloud-based and containerized environments</li>
<li>Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Empathy, a collaborative spirit, and a learning mindset for working cross-functionally with engineers of all levels to build security into the software development life cycle</li>
<li>Creative and strategic thinking that reduces risk through secure design and simplicity, not just controls</li>
<li>Broad security knowledge to connect the dots across domains and identify holistic ways to decrease the overall threat surface</li>
<li>A knack for distilling complex security concepts into clear actions and driving consensus without direct authority</li>
<li>A proactive mindset that threads security throughout the product lifecycle through activities like threat modeling, secure code review, and education</li>
<li>A strong grasp of offensive security to anticipate risks from an adversary&#39;s perspective, not just check compliance boxes</li>
<li>Experience with modern application stacks, infrastructure, and security tools to implement pragmatic defenses</li>
<li>Practiced cross-functional collaboration, effectively balancing security requirements with business objectives</li>
<li>Advocacy for security fundamentals like least privilege, defense-in-depth, and eliminating unnecessary complexity through smart design</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Hands-on technical expertise securing complex cloud environments and microservices architectures leveraging technologies like Kubernetes, Docker, and AWS / GCP</li>
<li>Exposure to offensive security techniques like vulnerability testing, bug bounty, pen testing, and red team exercises</li>
<li>Familiarity with AI/ML security risks such as prompt injection, data poisoning, model extraction, etc. and mitigations</li>
<li>Experience building security tools, applications, and automated tools</li>
<li>Solid foundational knowledge of both software and security engineering principles, and a keenness to continue learning</li>
<li>Excellent communication skills, able to distill complex security topics for broad audiences</li>
<li>Experience thriving in fast-paced environments and comfort navigating ambiguity</li>
</ul>
<p>Annual Compensation Range:</p>
<p>$300,000-$405,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>How to Apply:</p>
<p>If you&#39;re interested in this role, please submit your application through our website. We look forward to reviewing your application!</p>
<p>Note:</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>application security, infrastructure security, cloud-based security, containerized environments, programming languages, Python, Rust, Go, Java, threat modeling, secure design reviews, vulnerability management, bug bounty program, security policies, standards, playbooks, security awareness training, hands-on technical expertise, complex cloud environments, microservices architectures, Kubernetes, Docker, AWS, GCP, offensive security techniques, vulnerability testing, pen testing, red team exercises, AI/ML security risks, prompt injection, data poisoning, model extraction, security tools, applications, automated tools, software engineering principles, communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4502508008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d67d51e-39e</externalid>
      <Title>CyberSecurity, Offensive Security Engineer</Title>
<Description><![CDATA[<p>At Mistral AI, we&#39;re pushing the boundaries of what&#39;s possible with agentic systems, building products like Mistral Studio and Mistral Vibe that redefine how users interact with AI.</p>
<p>As a Security Researcher, you&#39;ll play a pivotal role in safeguarding these innovations by anticipating, identifying, and mitigating risks before they materialize. This isn&#39;t just about finding vulnerabilities; it&#39;s about shaping the future of secure AI by embedding an attacker&#39;s mindset into everything we build.</p>
<p>You&#39;ll work at the intersection of offensive security, AI safety, and product development, collaborating with cross-functional teams to harden our systems against evolving threats. Your expertise will directly influence how we design, deploy, and protect our most critical assets, ensuring our agents remain resilient, trustworthy, and ahead of adversaries.</p>
<p>Key responsibilities include:</p>
<ul>
<li><p>Proactively hunting for vulnerabilities in the interactions between our agentic applications, cloud infrastructure, and foundational models, with a focus on realistic, high-impact attack vectors.</p>
</li>
<li><p>Designing and executing red and purple team exercises, simulating sophisticated adversarial scenarios to stress-test our defenses and refine our detection capabilities.</p>
</li>
<li><p>Partnering with defensive teams to translate offensive insights into actionable improvements, from detection engineering to incident response.</p>
</li>
<li><p>Conducting in-depth penetration testing across our product suite, including AI-driven workflows, custom infrastructure, and user-facing interfaces.</p>
</li>
<li><p>Building and automating offensive tooling to scale your impact, leveraging cutting-edge techniques to stay ahead of emerging threats.</p>
</li>
<li><p>Communicating findings with clarity and conviction, ensuring technical and non-technical stakeholders understand risks and prioritize mitigations effectively.</p>
</li>
<li><p>Shaping Mistral AI&#39;s security strategy by contributing attacker-informed perspectives to threat modeling, risk assessment, and architectural decisions.</p>
</li>
</ul>
<p>We&#39;re looking for someone with 7+ years of offensive security experience, deep knowledge of AI/ML security risks, and hands-on experience assessing modern technology stacks. A builder&#39;s mindset, strong intuition for trust boundaries, and outstanding communication skills are also essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>offensive security, AI/ML security risks, custom Kubernetes deployments, cloud-native architectures, CI/CD pipelines, GitHub security best practices, macOS/Linux internals, Python/React-based applications, data science toolchains, AI/ML infrastructure, background in AI, data science, or related fields, experience in high-growth startups or research-driven organizations, expertise in adjacent disciplines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and integrates AI technology into daily working life, offering a comprehensive AI platform for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/2414ad08-5756-4875-afb5-04d26464b397</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>08f992cf-0e9</externalid>
      <Title>CyberSecurity Team Lead, Infrastructure and Application</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>Mistral AI is a technology company that develops and provides AI-powered solutions and platforms for enterprise use. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>As a CyberSecurity Team Lead, you will be responsible for architecting and enforcing the security posture of our entire technical stack, from on-premise foundations to cloud-native deployments. You will oversee the identification, prioritization, and remediation of vulnerabilities across both On-Prem and Cloud infrastructures as well as internal applications.</p>
<p>Responsibilities</p>
<ul>
<li>Oversee the identification, prioritization, and remediation of vulnerabilities across both On-Prem and Cloud infrastructures as well as internal applications.</li>
<li>Select, deploy, and maintain the tools needed for visibility and protection, including CNAPP, CSPM, SAST/DAST, secret scanning, and SBOM/CVE tracking.</li>
<li>Integrate security controls and automated gates directly into CI/CD pipelines to catch vulnerabilities before deployment (Shift Left).</li>
<li>Partner with engineering teams to interpret findings and &#39;ease the fix,&#39; providing patches, code snippets, or architectural advice to resolve issues quickly.</li>
<li>Define and maintain rigorous security guidelines and best practices for developers and system administrators.</li>
<li>Design and lead security awareness programs and technical training tailored for developers and admins to reduce human risk.</li>
<li>Track and define key security metrics (MTTR, coverage, vulnerability density) to visualize posture and progress to leadership.</li>
</ul>
<p>Requirements</p>
<ul>
<li>6+ years of experience in Information Security, with a specific focus on Application Security, Cloud Security, or DevSecOps.</li>
<li>Strong scripting skills (Python, Go, or Bash) to automate security tasks and integrate tools.</li>
<li>Deep understanding of CI/CD ecosystems and container orchestration (Kubernetes/Docker).</li>
<li>Hands-on experience with modern security tooling (e.g., Wiz, Snyk, SonarQube, Prisma, or similar enterprise tools).</li>
<li>Collaborative mindset: you view developers as partners, not adversaries, and focus on enabling them to code securely.</li>
<li>Clear communicator, autonomous, and capable of translating technical security risks into actionable engineering tasks.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive health insurance</li>
<li>Flexible working hours</li>
<li>Professional development opportunities</li>
</ul>
<p>Note: The company may offer additional benefits not listed here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Application Security, Cloud Security, DevSecOps, CI/CD ecosystems, Container orchestration, Modern security tooling, Scripting skills, Collaborative mindset, Clear communication, Industry certifications, Infrastructure as Code, Offensive security, Prior experience securing large-scale AI or Machine Learning infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides AI-powered solutions and platforms for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c9b75928-dd48-4432-b6f1-fc0b24e51657</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d63f049e-ad7</externalid>
      <Title>Security Lead, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Security Lead, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p>As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.</li>
<li>Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.</li>
<li>Design Automated Validation Systems: Collaborate with Google teams to engineer &#39;Auto RedTeaming&#39; solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.</li>
<li>Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.</li>
<li>Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.</li>
<li>Establish Security Scope: Collaborate with Google on conventional infrastructure protection, allowing the team to concentrate solely on agentic logic, model inference, and AI-centric exploits.</li>
</ul>
<p>About You:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).</li>
<li>Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.</li>
<li>Experience managing or technically leading small, high-performance engineering teams.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Familiarity with AI safety benchmarks and evaluation frameworks.</li>
<li>Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.</li>
<li>Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively.</li>
</ul>
<p>The US base salary range for this full-time position is between $248,000 and $349,000, plus bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,000 - $349,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, GenAI exploit development (prompt injection, adversarial examples, training data extraction), AI safety benchmarks, evaluation frameworks, Python, Go, C++, automated security tooling, fuzzers, technical team leadership, executive risk communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>248000</Compensationmin>
      <Compensationmax>349000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7560787</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>f73f108d-30a</externalid>
      <Title>Senior Security Engineer, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Senior Security Engineer, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</p>
<p><strong>About Us</strong> The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p><strong>The Role</strong> As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work &#39;in the room&#39; with product builders, identifying architectural flaws during the design phase long before formal reviews begin.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.</li>
<li>Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.</li>
<li>Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks (&#39;Auto Red Teaming&#39;) that prevent regression in future model versions.</li>
<li>Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.</li>
<li>Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.</li>
</ul>
<p><strong>About You</strong> In order to set you up for success as a Senior Security Engineer at Google DeepMind, we look for the following skills and experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Strong coding skills in Python, Go, or C++ with experience building security tools or automation.</li>
<li>Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Experience working in a consulting capacity with product teams or in a fast-paced &#39;startup-like&#39; environment.</li>
<li>Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.</li>
<li>Ability to translate complex probabilistic risks into actionable engineering fixes for developers.</li>
</ul>
<p><strong>Salary &amp; Benefits</strong> The US base salary range for this full-time position is between $166,000 and $244,000, plus bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000 - $244,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Python, Go, C++, Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, AI vulnerability classes, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, fuzzing techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a technology company that specializes in artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>166000</Compensationmin>
      <Compensationmax>244000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7596438</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US; Zurich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>7bce292a-74f</externalid>
      <Title>CyberSecurity Team Lead, Infrastructure and Application</Title>
      <Description><![CDATA[<p>Role summary</p>
<p>Embedded directly within Mistral&#39;s Security Engineering ecosystem, you will architect and enforce the security posture of our entire technical stack, from on-premise foundations to cloud-native deployments.</p>
<p>As a CyberSecurity Team Lead, you will oversee the identification, prioritization, and remediation of vulnerabilities across both On-Prem and Cloud infrastructures as well as internal applications.</p>
<ul>
<li>Select, deploy, and maintain the tools needed for visibility and protection, including CNAPP, CSPM, SAST/DAST, secret scanning, and SBOM/CVE tracking.</li>
<li>Integrate security controls and automated gates directly into CI/CD pipelines to catch vulnerabilities before deployment (Shift Left).</li>
<li>Partner with engineering teams to interpret findings and &#39;ease the fix,&#39; providing patches, code snippets, or architectural advice to resolve issues quickly.</li>
<li>Define and maintain rigorous security guidelines and best practices for developers and system administrators.</li>
<li>Design and lead security awareness programs and technical training tailored for developers and admins to reduce human risk.</li>
<li>Track and define key security metrics (MTTR, coverage, vulnerability density) to visualize posture and progress to leadership.</li>
</ul>
<p>Who you are</p>
<ul>
<li>6+ years of experience in Information Security, with a specific focus on Application Security, Cloud Security, or DevSecOps.</li>
<li>Strong scripting skills (Python, Go, or Bash) to automate security tasks and integrate tools.</li>
<li>Deep understanding of CI/CD ecosystems and container orchestration (Kubernetes/Docker).</li>
<li>Hands-on experience with modern security tooling (e.g., Wiz, Snyk, SonarQube, Prisma, or similar enterprise tools).</li>
<li>Collaborative mindset: you view developers as partners, not adversaries, and focus on enabling them to code securely.</li>
<li>Clear communicator, autonomous, and capable of translating technical security risks into actionable engineering tasks.</li>
</ul>
<p>It would be ideal if you also have:</p>
<ul>
<li>Industry certifications such as CISSP, CCSP, OSCP, or cloud-specific security certifications.</li>
<li>Strong Infrastructure as Code (IaC) experience with Terraform or Ansible.</li>
<li>Experience in offensive security (Penetration Testing) to better understand attacker mindsets.</li>
<li>Prior experience securing large-scale AI or Machine Learning infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Application Security, Cloud Security, DevSecOps, CI/CD, Container Orchestration, Modern Security Tooling, Scripting Skills, Infrastructure as Code, Industry Certifications, Offensive Security, Large-Scale AI or Machine Learning Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c9b75928-dd48-4432-b6f1-fc0b24e51657</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>ce09264c-2d9</externalid>
      <Title>Senior Cybersecurity Engineer</Title>
      <Description><![CDATA[<p>You are a passionate and experienced cybersecurity professional who thrives in fast-paced, global enterprise environments. With over five years of hands-on experience, you bring a deep understanding of enterprise-grade security solutions, including CASB, SSPM, WAF, firewalls, and email security. You have a proven track record in deploying, integrating, and managing network security solutions at scale, with a strong grasp of Zero Trust principles and architectures. Your expertise in CMMC regulations, technical data controls, and export authorization rules enables you to enforce U.S. person–only access restrictions for sensitive systems and datasets.</p>
<p>As a collaborative problem-solver, you are comfortable working across teams—from executives to engineers—to ensure robust security controls and compliance. You excel at conducting security investigations, analyzing complex events and alerts, and developing actionable metrics. Your familiarity with modern security frameworks, such as MITRE ATT&amp;CK and Cyber Kill Chain, empowers you to identify and mitigate threats proactively. You are detail-oriented, organized, and adept at multitasking, thriving in environments that require prioritization and agility.</p>
<p>You are committed to ongoing learning, staying current with emerging security technologies and frameworks. Your experience spans cloud security (AWS, GCP, Azure), offensive security, and incident response. You enjoy participating in audits and assessments, contributing to a culture of continuous improvement. With strong communication skills and an inclusive mindset, you foster trust and collaboration across diverse teams. If you’re ready to make an impact at the forefront of cybersecurity innovation, Synopsys is the place for you.</p>
<ul>
<li>Design, deploy, and manage enterprise-grade security solutions including CASB, SSPM, WAF, firewalls, and email protection across global environments.</li>
<li>Integrate and implement network security solutions, ensuring seamless operation and compliance with Zero Trust security principles.</li>
<li>Enforce CMMC regulations, technical data controls, and export authorization rules, including U.S. person-only access restrictions for controlled systems and datasets.</li>
<li>Conduct and support external audits, internal reviews, and compliance assessments related to CMMC and other regulatory frameworks.</li>
<li>Research, evaluate, pilot, and implement new security solutions at a global enterprise scale, collaborating with vendors and stakeholders.</li>
<li>Investigate security events and alerts from multiple log sources, performing end-to-end security investigations and reporting actionable findings.</li>
<li>Develop and manage the collection, reporting, and analysis of security events and metrics to drive continuous improvement.</li>
<li>Participate in incident response processes and support a light on-call pager duty rotation for critical issues.</li>
</ul>
<ul>
<li>Strengthen Synopsys’ global security posture by implementing advanced security controls and best practices.</li>
<li>Ensure compliance with CMMC and other regulatory frameworks, enabling secure operations for critical projects.</li>
<li>Protect sensitive data, intellectual property, and infrastructure against emerging cyber threats.</li>
<li>Drive continuous improvement in security operations through data-driven analysis and proactive risk management.</li>
<li>Enhance cross-functional collaboration between engineering, compliance, and executive teams to foster a culture of security awareness.</li>
<li>Support innovation by enabling secure cloud implementations and supporting offensive security initiatives.</li>
</ul>
<ul>
<li>Bachelor’s degree in Computer Science, Cybersecurity, Information Systems, or a related field required.</li>
<li>5+ years of hands-on experience with enterprise-grade security solutions (CASB, SSPM, WAF, firewalls, email security).</li>
<li>2+ years of experience installing, integrating, and deploying network security solutions.</li>
<li>Solid understanding of Zero Trust security principles and architectures.</li>
<li>Deep knowledge of CMMC regulations, technical data controls, and export authorization rules.</li>
<li>Experience enforcing U.S. person-only access restrictions for controlled systems and datasets.</li>
<li>Experience with external audits, internal reviews, and compliance assessments.</li>
<li>Broad experience securing cloud implementations (AWS, GCP, Azure) and offensive security domains.</li>
<li>Hands-on experience with Zscaler, Palo Alto Networks, Proofpoint, and other leading security platforms.</li>
<li>Relevant certifications (CEH, CISSP, GIAC, OSCP, AWS Certified Advanced Networking, Security+) preferred.</li>
<li>US citizenship or Green Card required.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$101,000 - $152,000</Salaryrange>
      <Skills>CASB, SSPM, WAF, firewalls, email security, Zero Trust security principles, CMMC regulations, technical data controls, export authorization rules, cloud security, offensive security, incident response, Zscaler, Palo Alto Networks, ProofPoint, AWS, GCP, Azure, CEH, CISSP, GIAC, OSCP, AWS Certified Advanced Networking, Security+</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and intellectual property (IP) used in chip design, verification, and manufacturing.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>101000</Compensationmin>
      <Compensationmax>152000</Compensationmax>
      <Applyto>https://careers.synopsys.com/job/austin/senior-cybersecurity-engineer-15063/44408/91625669280</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9eb58719-bef</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>The Application Security team at Anthropic is at the forefront of building security into every phase of the software development lifecycle. In this hands-on technical role, you will partner closely with software engineers and researchers to ensure security is a core consideration from initial design through implementation.</p>
<p>You will lead threat modeling and secure design reviews to proactively identify and mitigate risks early, and help with continuous risk assessment. You will build tools and systems to support developers shipping code securely, adhering to secure coding best practices.</p>
<p>Your insights will shape our tooling, detection capabilities, and defenses against emerging threats to AI/ML. You&#39;ll develop the standards, processes, and educational resources that enable all Anthropic engineers to be security champions.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Help secure AI products and internal tools that are introducing industry-novel security risks and pushing established security boundaries</li>
<li>Lead “shift left” security efforts to build security into the software development lifecycle</li>
<li>Conduct secure design reviews and threat modeling. Identify and prioritise risks, attack surfaces, and vulnerabilities</li>
<li>Develop tooling to scale security code reviews and respond to developer questions, including advising developers on remediating vulnerabilities and following secure coding practices</li>
<li>Manage Anthropic&#39;s vulnerability management program, including integrating data ingestion pipelines, coding logic to prioritise vulnerability fixes, supporting teams remediating vulnerabilities and developing automated systems at scale</li>
<li>Oversee Anthropic&#39;s bug bounty program. Set scope, validate submissions, perform root cause analysis, coordinate remediation with engineering teams, and award bounties. Cultivate relationships with the ethical hacker community</li>
<li>Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development</li>
<li>Develop and document security policies, standards, and playbooks. Conduct security awareness training for engineers</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 5+ years of hands-on experience in application and infrastructure security, including securing cloud-based and containerized environments</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Lead with empathy, a collaborative spirit, and a learning mindset to work cross-functionally with engineers of all levels to build security into the software development life cycle</li>
<li>Leverage creative and strategic thinking to reduce risk through secure design and simplicity, not just controls</li>
<li>Possess broad security knowledge to connect the dots across domains and identify holistic ways to decrease the overall threat surface</li>
<li>Are keen to distill complex security concepts into clear actions and drive consensus without direct authority</li>
<li>Embody a proactive mindset to thread security throughout the product lifecycle through activities like threat modeling, secure code review, and education</li>
<li>Have a strong grasp of offensive security to anticipate risks from an adversary&#39;s perspective, not just check compliance boxes</li>
<li>Bring experience with modern application stacks, infrastructure, and security tools to implement pragmatic defenses</li>
<li>Are practiced at collaborating cross-functionally and effectively balancing security requirements with business objectives</li>
<li>Advocate for security fundamentals like least privilege, defence-in-depth, and eliminating complexity through smart design so that security effort scales sub-linearly</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Hands-on technical expertise securing complex cloud environments and microservices architectures leveraging technologies like Kubernetes, Docker, and AWS / GCP</li>
<li>Exposure to offensive security techniques like vulnerability testing, bug bounty, pen testing, and red team exercises</li>
<li>Familiarity with AI/ML security risks such as prompt injection, data poisoning, model extraction, etc. and mitigations</li>
<li>Experience building security tools, applications, and automation</li>
<li>Solid foundational knowledge of both software and security engineering principles, and a keenness to continue learning</li>
<li>Excellent communication skills, able to distill complex security topics for broad audiences</li>
<li>Experience thriving in fast-paced environments, and comfort navigating ambiguity</li>
</ul>
<p>The annual compensation range for this role is $300,000 - $405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>application security, infrastructure security, cloud security, containerized environments, secure coding practices, vulnerability management, bug bounty program, offensive security, modern application stacks, security tools, Kubernetes, Docker, AWS, GCP, Python, Rust, Go, Java, vulnerability testing, pen testing, red team exercises, AI/ML security risks, security tools, automated tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation developing reliable, interpretable, and steerable AI systems. The company&apos;s mission is to create safe and beneficial AI for users and society.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>300000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4502508008</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>5fba9d7d-674</externalid>
      <Title>AI Security Fellow</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>AI Security at Anthropic</strong></p>
<p>We believe we are at an inflection point for AI&#39;s impact on cybersecurity. Models are now useful for cybersecurity tasks in practice: for example, Claude can now outperform human teams in some cybersecurity competitions and help us discover vulnerabilities in our own code.</p>
<p>We are looking for researchers and engineers to help us accelerate defensive use of AI to secure code and infrastructure.</p>
<p><strong>Anthropic Fellows Program Overview</strong></p>
<p>The Anthropic Fellows Program is designed to accelerate AI security and safety research, and foster research talent. We provide funding and mentorship to promising technical talent - regardless of previous experience - to research the frontier of AI security and safety for four months.</p>
<p>Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).</p>
<p>We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.</p>
<p><strong>What to Expect</strong></p>
<ul>
<li>Direct mentorship from Anthropic researchers</li>
<li>Access to a shared workspace (in either Berkeley, California or London, UK)</li>
<li>Connection to the broader AI safety research community</li>
<li>Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD &amp; access to benefits (benefits vary by country)</li>
<li>Funding for compute (~$15k/month) and other research expenses</li>
</ul>
<p><strong>Mentors, Research Areas, &amp; Past Projects</strong></p>
<p>Fellows will undergo a project selection &amp; mentor matching process. Potential mentors include:</p>
<ul>
<li>Nicholas Carlini</li>
<li>Keri Warr</li>
<li>Evyatar Ben Asher</li>
<li>Keane Lucas</li>
<li>Newton Cheng</li>
</ul>
<p>On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows projects, including:</p>
<ul>
<li>AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng</li>
<li>Strengthening Red Teams: A Modular Scaffold for Control Evaluations: Chloe Loughridge et al., mentored by Jon Kutasov and Joe Benton</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Are motivated by reducing catastrophic risks from advanced AI systems</li>
<li>Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic</li>
</ul>
<p><strong>Please note:</strong></p>
<p>We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit here at Anthropic. In previous cohorts, over 40% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on safety at other organisations.</p>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Experience with pentesting, vulnerability research, or other offensive security</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Experience with empirical ML research projects</li>
<li>Experience with deep learning frameworks and experiment management</li>
</ul>
<p><strong>Candidates must be:</strong></p>
<ul>
<li>Fluent in Python programming</li>
<li>Available to work full-time on the Fellows program for 4 months</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Interview process</strong></p>
<p>The interview process will include an initial application &amp; references check, technical assessments &amp; interviews, and a research discussion.</p>
<p><strong>Compensation</strong></p>
<p>The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week, for 4 months (with possible extension).</p>
<p><strong>Logistics</strong></p>
<p>Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.</p>
<p>Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.</p>
<p>Visa Sponsorship: We are not currently able to sponsor visas for fellows.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>3,850 USD / 2,310 GBP / 4,300 CAD per week</Salaryrange>
      <Skills>Python programming, AI security, Cybersecurity, Empirical research, Machine learning, Deep learning, Experiment management, Open-source projects, Pentesting, Vulnerability research, Offensive security, CVEs, Bug bounty vulnerabilities, Empirical ML research projects, Deep learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030244008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>45350b41-7eb</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring the safety of self-improving, highly autonomous AI systems, especially those with cyberphysical capabilities. See our previous related work on exploits, partnering with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
</ul>
<ul>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
</ul>
<ul>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
</ul>
<ul>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
</ul>
<ul>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
</ul>
<ul>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
</ul>
<ul>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs etc.) to characterize risks, defensive potential, and compare to human experts</li>
</ul>
<ul>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
</ul>
<ul>
<li>Are driven to find solutions to complex, high-stakes problems</li>
</ul>
<ul>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
</ul>
<ul>
<li>Have strong software engineering skills, particularly in Python</li>
</ul>
<ul>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
</ul>
<ul>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
</ul>
<ul>
<li>Thrive in collaborative environments</li>
</ul>
<ul>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
</ul>
<ul>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
</ul>
<ul>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
</ul>
<ul>
<li>Research or professional experience applying LLMs to security problems</li>
</ul>
<ul>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
</ul>
<ul>
<li>Experience building security tools or automation</li>
</ul>
<ul>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
</ul>
<ul>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
</ul>
<ul>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, offensive security research, vulnerability research, exploit development, AI safety research, threat modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. The company has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b0cdccea-4ed</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for LLMs to enable adversaries to cause harm at scale by automating attacks that today are carried out by human cybercrime groups, but in the future could be carried out easily by humans misusing LLMs. We are hiring security specialists who are experienced at exploitation and remediation, and who are interested in understanding how LLMs could cause harm in the future, so that we can prepare for and mitigate these risks before they arise.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Triage any vulnerabilities discovered, coordinate and assist the external and open-source community in remediation</li>
<li>Write scaffolds designed to automate typical traditional attack techniques to help clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>3+ years experience with pentesting, vulnerability research, or other offensive security experience</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organisational dynamics</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, communication skills, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems. The company is headquartered in San Francisco, CA.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>3469e687-cba</externalid>
      <Title>Offensive Security Engineer, Agent Security</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Offensive Security Engineer, Agent Security</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco; New York City; Remote - US; Seattle; Washington, DC</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>San Francisco, Seattle, New York: $347K – $490K • Offers Equity</li>
<li>Zone A: $312.3K – $490K • Offers Equity</li>
<li>Zone B: $277.6K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.</p>
<p><strong>About the Role</strong></p>
<p>We&#39;re seeking an exceptional Principal-level Offensive Security Engineer to challenge and strengthen OpenAI&#39;s security posture. This role isn&#39;t your typical red team job - it&#39;s an opportunity to engage broadly and deeply, craft innovative attack simulations, collaborate closely with defensive teams, and influence strategic security improvements across the organization.</p>
<p>You&#39;ll have the chance to not only find vulnerabilities but actively drive their resolution, automate offensive techniques with cutting-edge technologies, and use your unique attacker perspective to shape our security strategy.</p>
<p>This role will be primarily focused on continuously testing our agent-powered products like Codex and Operator. These systems are uniquely valuable targets because they’re rapidly evolving, have access to perform sensitive actions on behalf of users, and have large, diverse attack surfaces. You will play a crucial role in securing our agents by hunting for realistic vulnerabilities that emerge from the interactions between the applications, infrastructure, and models that power them.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Continuously hunt for vulnerabilities in the interactions between the applications, infrastructure, and models that power our agentic products.</li>
</ul>
<ul>
<li>Conduct open-scope red and purple team operations, simulating realistic attack scenarios.</li>
</ul>
<ul>
<li>Collaborate proactively with defensive security teams to enhance detection, response, and mitigation capabilities.</li>
</ul>
<ul>
<li>Perform comprehensive penetration testing on our diverse suite of products.</li>
</ul>
<ul>
<li>Leverage advanced automation and OpenAI technologies to optimize your offensive security work.</li>
</ul>
<ul>
<li>Present insightful, actionable findings clearly and compellingly to inspire impactful change.</li>
</ul>
<ul>
<li>Influence security strategy by providing attacker-driven insights into risk and threat modeling.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>7+ years of hands-on red team experience or exceptional accomplishments demonstrating equivalent expertise.</li>
</ul>
<ul>
<li>Deep expertise conducting offensive security operations within modern technology companies.</li>
</ul>
<ul>
<li>Experience designing, developing, or assessing the security of AI-powered systems.</li>
</ul>
<ul>
<li>Experience finding, exploiting, and mitigating common vulnerabilities in AI systems like prompt injection, leaking sensitive data, confused deputies, and dynamically generated UI components.</li>
</ul>
<ul>
<li>Exceptional skill in code review, identifying novel and subtle vulnerabilities.</li>
</ul>
<ul>
<li>Proven experience performing offensive security assessments in at least one hyperscaler cloud environment (Azure preferred).</li>
</ul>
<ul>
<li>Demonstrated mastery assessing complex technology stacks, including:</li>
</ul>
<ul>
<li>Highly customized Kubernetes clusters</li>
</ul>
<ul>
<li>Container environments</li>
</ul>
<ul>
<li>CI/CD pipelines</li>
</ul>
<ul>
<li>GitHub security</li>
</ul>
<ul>
<li>macOS and Linux operating systems</li>
</ul>
<ul>
<li>Data science tooling and environments</li>
</ul>
<ul>
<li>Python-based web services</li>
</ul>
<ul>
<li>React-based frontend applications</li>
</ul>
<ul>
<li>Strong intuitive understanding of trust boundaries and risk assessment in dynamic contexts.</li>
</ul>
<ul>
<li>Excellent coding skills, capable of writing robust tools and automation for offensive operations.</li>
</ul>
<ul>
<li>Ability to communicate complex technical concepts to both technical and non-technical stakeholders.</li>
</ul>
<p><strong>Experience Level</strong></p>
<p>Senior</p>
<p><strong>Employment Type</strong></p>
<p>Full-time</p>
<p><strong>Workplace Type</strong></p>
<p>Remote</p>
<p><strong>Category</strong></p>
<p>Engineering</p>
<p><strong>Industry</strong></p>
<p>Technology</p>
<p><strong>Salary Range</strong></p>
<p>$347K – $490K • Offers Equity</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>Red team experience</li>
<li>Offensive security operations</li>
<li>AI-powered systems security</li>
<li>Vulnerability assessment</li>
<li>Penetration testing</li>
<li>Automation</li>
<li>Code review</li>
<li>Cloud security</li>
<li>Kubernetes</li>
<li>Container security</li>
<li>CI/CD pipelines</li>
<li>GitHub security</li>
<li>macOS and Linux operating systems</li>
<li>Data science tooling and environments</li>
<li>Python-based web services</li>
<li>React-based frontend applications</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Azure cloud security</li>
<li>Highly customized Kubernetes clusters</li>
<li>Container environments</li>
<li>CI/CD pipelines</li>
<li>GitHub security</li>
<li>macOS and Linux operating systems</li>
<li>Data science tooling and environments</li>
<li>Python-based web services</li>
<li>React-based frontend applications</li>
</ul>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$347K – $490K • Offers Equity</Salaryrange>
      <Skills>red team experience, offensive security operations, AI-powered systems security, vulnerability assessment, penetration testing, automation, code review, cloud security, kubernetes, container security, ci/cd pipelines, github security, macos and linux operating systems, data science tooling and environments, python-based web services, react-based frontend applications, azure cloud security, highly customized kubernetes clusters, container environments</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/bb97fffc-cdda-43a3-a6bc-234f9c031720</Applyto>
      <Location>San Francisco; New York City; Remote - US; Seattle; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>2752bb7f-0b9</externalid>
      <Title>Security Engineer, AI Security</Title>
      <Description><![CDATA[<p>We&#39;re seeking an offensive-minded Security Engineer to help secure AI-enabled systems, agents, and LLM-integrated workflows across EA&#39;s games, services, and enterprise platforms.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>You will work closely with Application Security and Red Team engineers, applying an attacker&#39;s mindset to AI systems while building scalable security testing, automation, and guardrails that meaningfully reduce risk.</p>
<ul>
<li>Perform security testing and reviews of AI-enabled applications, agents, and workflows, including architecture, design, and implementation analysis</li>
<li>Identify and validate vulnerabilities in LLM-based systems such as data leakage, insecure tool use, authentication gaps, and abuse paths</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Strong background in application security, offensive security, or a combination of both</li>
<li>Hands-on experience identifying and exploiting security weaknesses in modern applications and services</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>application security, offensive security, hands-on experience identifying and exploiting security weaknesses, experience assessing commercial AI platforms or enterprise AI services, familiarity with agent orchestration, tool calling, function execution, or multi-agent systems</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Security-Engineer/211803</Applyto>
      <Location>Orlando</Location>
      <Country></Country>
      <Postedate>2026-01-10</Postedate>
    </job>
  </jobs>
</source>