{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/it-development"},"x-facet":{"type":"skill","slug":"it-development","display":"IT Development","count":6},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7cc85573-4a2"},"title":"Technical Policy Manager, Cyber Harms","description":"<p>We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.</p>\n<p>This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities while preventing misuse by malicious actors.</p>\n<p>In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. 
You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.</p>\n<p>You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.</p>\n<p>You will also analyze safety system performance in traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.</p>\n<p>You will develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces. You will partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.</p>\n<p>You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.</p>\n<p>You will monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these.</p>\n<p>You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.</p>\n<p>To be successful in this role, you will need to have:</p>\n<ul>\n<li>An M.S. 
or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>\n<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>\n<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>\n<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>\n<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>\n<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>\n<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>\n<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>\n<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>\n<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>\n<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>\n<li>Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Background in AI/ML systems, particularly experience with large language models</li>\n<li>Experience developing ML-based security systems or adversarial ML 
research</li>\n<li>Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)</li>\n<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>\n<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>\n<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>\n<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7cc85573-4a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066981008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Cybersecurity","Vulnerability research","Exploit development","Network security","Malware analysis","Penetration testing","Detection","Monitoring","Incident response","Scientific computing","Data analysis","Programming (Python)","Responsible disclosure practices","Vulnerability coordination","Cybersecurity frameworks (MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)"],"x-skills-preferred":["AI/ML systems","Large language models","ML-based security systems","Adversarial ML research","Defense, intelligence, or security organizations","Published security research","Disclosed vulnerabilities","Bug bounty programs","Trust & Safety operations","Content moderation at scale","Certifications (OSCP, OSCE, 
GXPN)"],"datePosted":"2026-04-18T15:56:47.739Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cybersecurity, Vulnerability research, Exploit development, Network security, Malware analysis, Penetration testing, Detection, Monitoring, Incident response, Scientific computing, Data analysis, Programming (Python), Responsible disclosure practices, Vulnerability coordination, Cybersecurity frameworks (MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, Large language models, ML-based security systems, Adversarial ML research, Defense, intelligence, or security organizations, Published security research, Disclosed vulnerabilities, Bug bounty programs, Trust & Safety operations, Content moderation at scale, Certifications (OSCP, OSCE, GXPN)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_62900fcd-562"},"title":"Security Engineer - Offensive Security","description":"<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>\n<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. 
Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>\n<p>Responsibilities:</p>\n<p>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</p>\n<p>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</p>\n<p>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</p>\n<p>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</p>\n<p>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</p>\n<p>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</p>\n<p>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</p>\n<p>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</p>\n<p>Contribute to internal security tooling repositories and champion engineering best practices within the team.</p>\n<p>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</p>\n<p>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</p>\n<p>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</p>\n<p>Lead offensive 
security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</p>\n<p>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_62900fcd-562","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7820898","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Go","Web application security","Cloud platforms (AWS, Azure, or GCP)","Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound)","Adversary tradecraft and frameworks (MITRE ATT&CK)","Excellent written and verbal communication skills"],"x-skills-preferred":["Experience conducting offensive security in fintech, financial services, or other highly regulated environments","Background in vulnerability research, exploit development, or CVE discovery","Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations)","Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) 
for threat hunting or investigative support","Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"],"datePosted":"2026-04-18T15:51:01.913Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc0287c3-e30"},"title":"Research Engineer / Scientist, Frontier Red Team (Cyber)","description":"<p><strong>About the Role</strong></p>\n<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. 
This is a novel and massive threat surface.</p>\n<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>\n<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy: what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>\n<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>\n<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>\n<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>\n<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>\n</ul>\n<p><strong>Sample Projects</strong></p>\n<ul>\n<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>\n<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>\n<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) 
to characterize risks, defensive potential, and compare to human experts</li>\n<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have deep expertise in cybersecurity or security research</li>\n<li>Are driven to find solutions to complex, high-stakes problems</li>\n<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>\n<li>Have strong software engineering skills, particularly in Python</li>\n<li>Can own entire problems end-to-end, including both technical and non-technical components</li>\n<li>Design and run experiments quickly, iterating fast toward useful results</li>\n<li>Thrive in collaborative environments</li>\n<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>\n<li>Are comfortable working on sensitive projects that require discretion and integrity</li>\n<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience with offensive security research, vulnerability research, or exploit development</li>\n<li>Research or professional experience applying LLMs to security problems</li>\n<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>\n<li>Experience building security tools or automation</li>\n<li>Track record of building demos or prototypes that communicate complex technical ideas</li>\n<li>Experience working with external stakeholders (policymakers, government, researchers)</li>\n<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study: A field relevant 
to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. 
We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc0287c3-e30","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5076477008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["cybersecurity","security research","LLM-based agents","autonomous systems","software engineering","Python","AI safety","threat modeling"],"x-skills-preferred":["offensive security research","vulnerability research","exploit development","research or professional experience applying LLMs to security problems","competitive CTFs","bug bounties","security tools or automation","demos or prototypes","external stakeholders","AI safety research"],"datePosted":"2026-04-18T15:43:56.704Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety 
research","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_090cc579-41b"},"title":"Senior Business Analyst","description":"<p>We are seeking a Senior Business Analyst to join our IPG Business &amp; Product Operations department. As a Senior Business Analyst, you will be responsible for directly engaging with internal customers to understand their needs, gathering detailed requirements, and prioritizing product features. You will also conduct thorough analyses, document findings, and recommend process and system improvements. Additionally, you will generate actionable insights and comprehensive reporting for business leaders, empowering data-driven decisions that optimize profitability.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Directly engaging with internal customers to understand their needs, gathering detailed requirements, and prioritizing product features.</li>\n<li>Translating customer requirements into user stories and collaborating with development teams to produce technical specifications and deliver solutions.</li>\n<li>Conducting thorough analyses, documenting findings, and recommending process and system improvements.</li>\n<li>Identifying and tracking key metrics to foster a culture of continuous process improvement.</li>\n<li>Generating actionable insights and comprehensive reporting for business leaders, empowering data-driven decisions that optimize profitability.</li>\n<li>Monitoring project activities, reporting status, identifying potential issues, and implementing corrective actions to ensure project milestones are met.</li>\n<li>Acting as the primary liaison between user groups and the IT development team, facilitating effective communication and solution delivery.</li>\n<li>Leading or participating in 
cross-functional project teams to develop, test, and implement new business models, system upgrades, enhancements, and bug fixes.</li>\n<li>Coordinating and performing user acceptance testing (UAT) with technical and cross-functional teams.</li>\n<li>Maintaining and updating user guides and system solution documentation to support user adoption and compliance.</li>\n<li>Developing and delivering end-user training.</li>\n<li>Mentoring junior peers and networking with senior professionals in your area of expertise.</li>\n</ul>\n<p>As a Senior Business Analyst, you will have the opportunity to enable the Synopsys IP Group to scale and innovate by ensuring robust, efficient, and user-friendly business systems. You will drive continuous process improvements that enhance business performance and operational excellence. You will provide critical insights and analytics that empower leadership to make strategic, data-driven decisions. You will facilitate seamless integration and collaboration across departments and global teams. You will support organizational growth by anticipating and resolving system challenges and recommending scalable solutions. 
You will promote a culture of knowledge-sharing and continuous learning within the team and broader organization.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_090cc579-41b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/porto-salvo/senior-business-analyst/44408/92616532976","x-work-arrangement":null,"x-experience-level":"senior","x-job-type":"employee","x-salary-range":null,"x-skills-required":["Business Analysis","Product Ownership","Project/Program Management","IT Development","Clarity Project & Portfolio Management (PPM)","Confluence","Jira","Agile development methods"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:21:04.915Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Porto Salvo"}},"occupationalCategory":"operations","industry":"technology","skills":"Business Analysis, Product Ownership, Project/Program Management, IT Development, Clarity Project & Portfolio Management (PPM), Confluence, Jira, Agile development methods"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c76d0c6d-ec7"},"title":"Technical Policy Manager, Cyber Harms","description":"<p><strong>About the Role:</strong></p>\n<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. 
As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>\n<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>\n<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>\n<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>\n<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>\n<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>\n<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>\n<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>\n<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>\n<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>\n<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>\n<li>Contribute to external communications, including model 
cards, blog posts, and policy documents related to cybersecurity safety</li>\n<li>Monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these</li>\n<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>\n<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>\n<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>\n<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>\n<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>\n<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>\n<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>\n<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>\n<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>\n<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>\n<li>Comfort 
working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>\n<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Background in AI/ML systems, particularly experience with large language models</li>\n<li>Experience developing ML-based security systems or adversarial ML research</li>\n<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>\n<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>\n<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>\n<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>\n<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c76d0c6d-ec7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066981008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"The annual compensation for this role is not specified in the job posting.","x-skills-required":["cybersecurity","vulnerability research","exploit development","network security","malware analysis","penetration testing","scientific computing","data analysis","programming (Python)","threat modelling","policy frameworks","responsible disclosure practices","vulnerability coordination","cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, 
CWE/CVE systems)"],"x-skills-preferred":["AI/ML systems","large language models","ML-based security systems","adversarial ML research","defence, intelligence, or security organisations","NSA, CISA, national labs, security contractors","published security research","disclosed vulnerabilities","bug bounty programs","Trust & Safety operations","content moderation at scale","OSCP, OSCE, GXPN, or equivalent certifications","dual-use security research concerns","ethical considerations in AI safety"],"datePosted":"2026-03-08T13:50:25.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, programming (Python), threat modelling, policy frameworks, responsible disclosure practices, vulnerability coordination, cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, large language models, ML-based security systems, adversarial ML research, defence, intelligence, or security organisations, NSA, CISA, national labs, security contractors, published security research, disclosed vulnerabilities, bug bounty programs, Trust & Safety operations, content moderation at scale, OSCP, OSCE, GXPN, or equivalent certifications, dual-use security research concerns, ethical considerations in AI safety"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45350b41-7eb"},"title":"Research Engineer / Scientist, Frontier Red Team (Cyber)","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. 
We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Team</strong></p>\n<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>\n<p>In 2026, we&#39;re focused on researching self-improving, highly autonomous AI systems and ensuring their safety, especially systems with cyberphysical capabilities. See our previous related work on exploits, partnering with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>\n<p><strong>About the Role</strong></p>\n<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly demonstrating novel cyber capabilities. We think 2026 will be the year when models reach expert-level, even superhuman, performance in several cybersecurity domains. This is a novel and massive threat surface.</p>\n<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>\n<p>This is applied research with real-world stakes. 
Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>\n<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>\n<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>\n<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>\n<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>\n</ul>\n<p><strong>Sample Projects</strong></p>\n<ul>\n<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>\n<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>\n<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance with human experts</li>\n<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have deep expertise in cybersecurity or security research</li>\n<li>Are driven to find solutions to complex, high-stakes problems</li>\n<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>\n<li>Have strong software engineering skills, particularly in Python</li>\n<li>Can own entire problems end-to-end, including both technical and non-technical components</li>\n<li>Design and run experiments quickly, iterating fast toward useful results</li>\n<li>Thrive in collaborative environments</li>\n<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>\n<li>Are comfortable working on sensitive projects that require discretion and integrity</li>\n<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience with offensive security research, vulnerability research, or exploit development</li>\n<li>Research or professional experience applying LLMs to security problems</li>\n<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>\n<li>Experience building security tools or automation</li>\n<li>Track record of building demos or prototypes that communicate complex technical ideas</li>\n<li>Experience working with external stakeholders (policymakers, government, researchers)</li>\n<li>Familiarity with AI safety research and threat modeling for advanced AI 
systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every qualification.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_45350b41-7eb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5076477008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $850,000 USD","x-skills-required":["cybersecurity","security research","LLM-based agents","autonomous systems","Python","software engineering"],"x-skills-preferred":["offensive security research","vulnerability research","exploit development","AI safety research","threat modeling"],"datePosted":"2026-03-08T13:46:35.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, 
offensive security research, vulnerability research, exploit development, AI safety research, threat modeling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":850000,"unitText":"YEAR"}}}]}