{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/adversarial-thinking"},"x-facet":{"type":"skill","slug":"adversarial-thinking","display":"Adversarial Thinking","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ef383eb-d73"},"title":"Abuse Investigator (CBRN)","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real world deployment and iteratively updating based on what we learn.</p>\n<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. 
This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting misuse of our platform or services. Specifically, you will focus on cases where users attempt to use our platform in connection with prohibited activities such as developing or delivering biological and/or chemical threats to harm people, critical resources/infrastructure, or the environment. OpenAI has strict prohibitions and policies in this area, and you will detect, disrupt, and enforce on actors who violate our policies.</p>\n<p>This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment.</p>\n<p>You will respond to time-sensitive escalations and will be expected to present your investigative work, both in writing and verbally, to key stakeholders across government, industry, and civil society, when required. You will also help inform the company’s evolving threat response and integrity monitoring and mitigation stack, while working closely on individual cases and enforcement assessments.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Detect, investigate, and disrupt the attempted misuse of OpenAI products for the development or dissemination of biological threats, including dual-use misuse and emerging biothreat vectors. 
You will also be expected to work across related domains (e.g., chemical threats).</li>\n</ul>\n<ul>\n<li>Partner closely with teams across Policy, Legal, Integrity, Global Affairs, and Security to conduct robust investigations, including cross-internet and open-source research to trace and understand abuse and ensure OpenAI’s mitigations address evolving needs in the space.</li>\n</ul>\n<ul>\n<li>Develop abuse signals and tracking strategies to proactively detect users attempting dual-use or biohazard-related misuse of our platform and review content for enforcement decisions.</li>\n</ul>\n<ul>\n<li>Communicate findings from your investigations to internal stakeholders and leadership and, at times, external partners including regulatory or scientific organizations.</li>\n</ul>\n<ul>\n<li>Develop a categorical understanding of our product surfaces in the biosecurity space, and work with teams to improve data visibility and internal tooling.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have industry-leading experience in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), or related biodefense fields</li>\n</ul>\n<ul>\n<li>Have strong familiarity with technical investigations, especially using SQL and Python, in a government, military, and/or tech company</li>\n</ul>\n<ul>\n<li>Have demonstrated experience in risk-mitigation (e.g., adversarial thinking and a record of success in threat mitigation)</li>\n</ul>\n<ul>\n<li>Have worked on investigations related to biological threat actors, malicious dual-use exploitation, or responsible innovation in synthetic biology or bioengineering</li>\n</ul>\n<ul>\n<li>Have 5+ years of experience tracking misuse and/or abuse in biosecurity or life sciences domains, or equivalent education in these domains</li>\n</ul>\n<ul>\n<li>Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve 
real-world problems</li>\n</ul>\n<ul>\n<li>Have experience presenting analytical work in public or policy settings</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.</p>\n<p>For additional information, please see <a href=\"https://cdn.openai.com/policies/eeo-policy-statement.pdf\">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protec</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ef383eb-d73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/5d618f84-fcce-496c-bfe9-995bd9ff9065","x-work-arrangement":"Remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$230.4K – $425K","x-skills-required":["biosecurity","biological weapons non-proliferation","dual-use research of concern (DURC)","biodefense","SQL","Python","risk-mitigation","adversarial thinking","threat mitigation","biological threat actors","malicious dual-use exploitation","responsible innovation in synthetic biology or bioengineering","misuse and/or abuse in biosecurity or life sciences domains","innovative detection solutions","open-ended research","analytical work in public or policy settings","scaling and automating processes","language models"],"x-skills-preferred":[],"datePosted":"2026-03-08T22:14:53.880Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; San Francisco; Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, SQL, Python, risk-mitigation, adversarial thinking, threat mitigation, biological threat actors, malicious dual-use exploitation, responsible innovation in synthetic biology or bioengineering, misuse and/or abuse 
in biosecurity or life sciences domains, innovative detection solutions, open-ended research, analytical work in public or policy settings, scaling and automating processes, language models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230400,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b39d0db-e3b"},"title":"Product Manager, Safety Systems","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Product Manager, Safety Systems</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Product Management</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>Safety Systems manages the complete lifecycle of safety efforts for 
OpenAI’s frontier models, ensuring our models are deployed responsibly and have a positive impact on society. Our work spans diverse research and engineering initiatives—from system-level safeguards and model training to evaluation and red-teaming—all aimed at mitigating misuse and maintaining our high bar for safety. We lead OpenAI&#39;s commitment to developing and deploying safe Artificial General Intelligence (AGI), fostering a culture of trust, responsibility, and transparency.</p>\n<p>Our goal is to continuously learn from deployments, distribute AI’s benefits widely, and ensure that powerful tools remain aligned with human values and safety considerations.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Product Manager on the Safety Systems team, you will drive initiatives which ensure that OpenAI’s frontier model deployments are safe, impactful, and aligned with user needs and technical innovation. You will clarify strategic priorities, develop safety-focused product roadmaps, and collaborate closely with AI researchers, software engineers, policy experts, and cross-functional partners. 
This role suits a proactive, technically skilled product manager adept at adversarial thinking and excited to tackle challenging, ambiguous problems through structured analysis and collaborative decision-making.</p>\n<p>This position is based in San Francisco, CA, with relocation assistance available.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Partner closely with research, engineering, data science, policy teams, and other stakeholders to embed safety throughout the development and deployment of frontier AI models.</li>\n</ul>\n<ul>\n<li>Develop comprehensive frameworks for understanding and mitigating deployment safety risks, drawing on data analysis, expert consultation, and adversarial assessments.</li>\n</ul>\n<ul>\n<li>Define strategic priorities and product roadmaps focused on improving deployment safety, enhancing reliability, and managing emerging AI capabilities.</li>\n</ul>\n<ul>\n<li>Create scalable methodologies, tools, and processes for evaluating, refining, and continuously improving our safety systems.</li>\n</ul>\n<ul>\n<li>Establish repeatable processes to integrate cutting-edge AI safety research into OpenAI’s models and product offerings.</li>\n</ul>\n<ul>\n<li>Develop and continuously refine clear, actionable metrics that effectively capture safety performance and user experience at scale.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 6+ years of product management or related industry roles, with specific expertise in AI safety, trust &amp; safety, integrity, or related domains.</li>\n</ul>\n<ul>\n<li>Are deeply curious and interested in interdisciplinary fields such as human-computer interaction, psychology, philosophy, or similar areas.</li>\n</ul>\n<ul>\n<li>Have hands-on experience driving consensus and action in ambiguous spaces.</li>\n</ul>\n<ul>\n<li>Excel at identifying and challenging underlying assumptions and constraints through insightful 
questioning.</li>\n</ul>\n<ul>\n<li>Are highly effective at cross-functional collaboration and communicating complex technical concepts clearly and persuasively.</li>\n</ul>\n<ul>\n<li>Enjoy working in a fast-paced, high-growth environment.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b39d0db-e3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f6c971e8-453f-4fd1-acc8-aa57e8bd4007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $385K • Offers Equity","x-skills-required":["product management","AI safety","trust & safety","integrity","human-computer interaction","psychology","philosophy","data analysis","expert consultation","adversarial assessments","strategic priorities","product roadmaps","cross-functional collaboration","communication"],"x-skills-preferred":["adversarial thinking","structured analysis","collaborative decision-making","fast-paced environment","high-growth environment"],"datePosted":"2026-03-06T18:42:17.111Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product management, AI safety, trust & safety, integrity, human-computer interaction, psychology, philosophy, data analysis, expert consultation, adversarial assessments, strategic priorities, product roadmaps, cross-functional collaboration, communication, adversarial thinking, structured analysis, collaborative decision-making, fast-paced environment, high-growth environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1149c30-15f"},"title":"Threat Modeler, Preparedness","description":"<p><strong>Threat Modeler, Preparedness</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Preparedness team is an important part of the Safety Systems org at 
OpenAI, and is guided by OpenAI’s Preparedness Framework.</p>\n<p>Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.</p>\n<p>The mission of the Preparedness team is to:</p>\n<ol>\n<li>Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society</li>\n<li>Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems</li>\n</ol>\n<p>Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.</p>\n<p><strong>About the Role</strong></p>\n<p>As a threat modeler, you will own OpenAI’s holistic approach to identifying, modeling, and forecasting frontier risks from frontier AI systems. This role ensures that our evaluation frameworks, safeguards, and taxonomies are robust, high-coverage, and forward-looking. You will help the company answer the “why” behind our most stringent risk-prevention efforts, shaping the rationale for prioritizing and mitigating risks across domains. 
You will serve as a central node connecting technical, governance, and policy perspectives, shaping the prioritization, focus, and rationale of our approach to frontier risks from AI.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Develop and maintain comprehensive threat models across all misuse areas (bio, cyber, attack planning, etc.)</li>\n</ul>\n<ul>\n<li>Develop plausible and convincing threat models across loss of control, self-improvement, and other possible alignment risks from frontier AI systems</li>\n</ul>\n<ul>\n<li>Forecast risks by combining technical foresight, adversarial simulation, and analysis of emerging trends</li>\n</ul>\n<ul>\n<li>Pair closely with technical partners on capability evaluations to ensure these map to and cover the gamut of severe risks differentially enabled by frontier AI systems</li>\n</ul>\n<ul>\n<li>Pair closely with Bio and Cyber Leads to size the remaining risk of the designed safeguards and translate threat models into actionable mitigation designs</li>\n</ul>\n<ul>\n<li>Act as the thought partner and explainer of “why” and “when” for high-investment mitigation efforts—helping stakeholders understand the rationale behind prioritization</li>\n</ul>\n<ul>\n<li>Serve as the central node connecting technical, governance, and policy perspectives on the prioritization, focus, and rationale of our approach to misuse risk</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Understand risks from frontier AI systems and have a strong grasp of AI alignment literature</li>\n</ul>\n<ul>\n<li>Bring deep experience in threat modeling, risk analysis, or adversarial thinking (e.g., security, national security, or safety)</li>\n</ul>\n<ul>\n<li>Know how AI evaluations work and can connect eval results to both capability testing and safeguard sufficiency</li>\n</ul>\n<ul>\n<li>Enjoy working across technical and policy domains to drive rigorous, multidisciplinary risk assessments</li>\n</ul>\n<ul>\n<li>Communicate complex risks 
clearly and compellingly to both technical and non-technical audiences</li>\n</ul>\n<ul>\n<li>Think in systems and naturally anticipate second-order and cascading risks</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and that requires a deep understanding of the potential risks and benefits of AI.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b1149c30-15f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f735a48e-c3c2-4387-abf7-7b39452e1ec5","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$325K • Offers Equity","x-skills-required":["threat modeling","risk analysis","adversarial thinking","AI alignment literature","AI evaluations","capability testing","safeguard sufficiency"],"x-skills-preferred":["security","national security","safety","technical writing","communication"],"datePosted":"2026-03-06T18:40:46.437Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"threat modeling, risk analysis, adversarial thinking, AI alignment literature, AI evaluations, capability testing, safeguard sufficiency, security, national security, safety, technical writing, 
communication","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ffe061d1-9b2"},"title":"Cybersecurity Landscape Analyst","description":"<p><strong>Cybersecurity Landscape Analyst</strong></p>\n<p><strong>About the Team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem in close collaboration with our internal and external partners. Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p>The Strategic Intelligence &amp; Analysis (SIA) team provides safety intelligence for OpenAI’s products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Our work informs safety mitigations, product decisions, and partnerships, ensuring OpenAI’s tools are deployed securely and responsibly across critical sectors.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a Cybersecurity Landscape Analyst to help OpenAI understand how the external cyber threat environment is evolving—and what it means for our products, customers, and the broader AI ecosystem.</p>\n<p>This is an outward-facing intelligence and analysis role. The Cybersecurity Landscape Analyst monitors emerging attacker TTPs, threat-group behaviors, infrastructure trends, and real-world cyber innovation at the intersection of AI and all cyber threat surfaces including devices and robotics. 
Using structured research, competitive intelligence, adversarial thinking, and scenario analysis, you will stress-test assumptions about how frontier AI capabilities could be misused, targeted, or integrated into broader cyber campaigns—even in the absence of active warnings or internal incidents.</p>\n<p>This role does not conduct internal investigations, run detection on platform data, or own OpenAI’s infrastructure protection or incident response. Instead, it translates the external cyber landscape into clear risk context, strategic foresight, and decision support for internal stakeholders, with defined handoffs into operational, detection, and security teams. While not the owner of those functions, the role works closely with cross-functional teams, drawing on their operational perspectives to sharpen external analysis, while bringing them external insights on threat trends and attacker innovation to inform priorities and preparedness. In other words, this role sits at the boundary between external intelligence and internal execution, ensuring bi-directional flow between strategic cyber analysis and the teams responsible for implementation. 
Your work will synthesize signals from external sources alongside insights from Integrity, Security, and Safety Systems teams to produce crisp strategic assessments, priority questions, and actionable recommendations.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Monitor and interpret the evolving cyber threat landscape</li>\n</ul>\n<ul>\n<li>Track emerging cyber TTPs, attacker innovation, threat-group behavior, and ecosystem-level shifts relevant to AI systems.</li>\n</ul>\n<ul>\n<li>Analyze how state actors, criminal networks, hacktivists, and hybrid actors are adapting AI tools—or targeting AI infrastructure.</li>\n</ul>\n<ul>\n<li>Identify structural risk patterns that may affect AI providers, customers, and downstream sectors.</li>\n</ul>\n<ul>\n<li>Conduct structured external research and adversarial analysis</li>\n</ul>\n<ul>\n<li>Use competitive intelligence, red-team style thinking, and scenario methods to explore how frontier AI capabilities could be exploited or targeted.</li>\n</ul>\n<ul>\n<li>Develop forward-looking assessments of how cyber threats may evolve over 6–24 months.</li>\n</ul>\n<ul>\n<li>Surface “unknown unknowns” and stress-test prevailing assumptions about attacker incentives, constraints, and capabilities.</li>\n</ul>\n<ul>\n<li>Translate external signals into strategic risk context for cross-functional teammates</li>\n</ul>\n<ul>\n<li>Produce concise, executive-ready intelligence estimates that articulate threat relevance, potential impact pathways, and confidence levels.</li>\n</ul>\n<ul>\n<li>Develop priority questions and structured risk frames that inform product, safety, security, and</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ffe061d1-9b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/40c2a105-e7a2-4f2a-9376-aa75782e668c","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$178.2K – $320K","x-skills-required":["Cybersecurity","Threat analysis","Intelligence analysis","Adversarial thinking","Scenario analysis","Competitive intelligence","Red-team style thinking","AI systems","Cyber threat surfaces","Devices and robotics"],"x-skills-preferred":["Structured research","External research","Threat-group behavior","Ecosystem-level shifts","State actors","Criminal networks","Hacktivists","Hybrid actors","AI tools","AI infrastructure"],"datePosted":"2026-03-06T18:35:14.020Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cybersecurity, Threat analysis, Intelligence analysis, Adversarial thinking, Scenario analysis, Competitive intelligence, Red-team style thinking, AI systems, Cyber threat surfaces, Devices and robotics, Structured research, External research, Threat-group behavior, Ecosystem-level shifts, State actors, Criminal networks, Hacktivists, Hybrid actors, AI tools, AI infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":178200,"maxValue":320000,"unitText":"YEAR"}}}]}