{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/influence-operations"},"x-facet":{"type":"skill","slug":"influence-operations","display":"Influence Operations","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_536aa8eb-f7c"},"title":"Technical Influence Operations Threat Investigator","description":"<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation.</p>\n<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>\n<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
This role may require responding to escalations during weekends and holidays.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behavior, astroturfing, and narrative manipulation campaigns</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover coordinated networks of threat actors conducting influence operations</li>\n</ul>\n<ul>\n<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioral clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>\n</ul>\n<ul>\n<li>Monitor and analyze state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>\n</ul>\n<ul>\n<li>Forecast how advances in AI technology, including improved content generation, voice synthesis, and multimodal capabilities, will reshape the influence operations landscape and inform safety-by-design strategies</li>\n</ul>\n<p>You may be a good fit if 
you:</p>\n<ul>\n<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>\n</ul>\n<ul>\n<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<ul>\n<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>\n</ul>\n<ul>\n<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>\n</ul>\n<ul>\n<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>\n</ul>\n<ul>\n<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>\n</ul>\n<ul>\n<li>Familiarity with social network analysis techniques and tools for mapping coordinated behavior</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning security, or technology abuse 
investigation</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_536aa8eb-f7c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5140239008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000-$290,000 USD","x-skills-required":["Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare","Proficiency in SQL and Python for data analysis and threat detection","Experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations","Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations","Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems"],"x-skills-preferred":["Experience at a major technology platform working on influence operations, platform integrity, or content authenticity","Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts","Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions","Familiarity with social network analysis techniques and tools for mapping 
coordinated behavior","Background in AI safety, machine learning security, or technology abuse investigation"],"datePosted":"2026-04-18T15:54:54.163Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare, Proficiency in SQL and Python for data analysis and threat detection, Experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations, Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations, Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems, Experience at a major technology platform working on influence operations, platform integrity, or content authenticity, Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts, Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions, Familiarity with social network analysis techniques and tools for mapping coordinated behavior, Background in AI safety, machine learning security, or technology abuse investigation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ee98770-f81"},"title":"Technical Influence Operations Threat Investigator","description":"<p><strong>About 
Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behaviour, and other forms of information manipulation.</p>\n<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>\n<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
This role may require responding to escalations during weekends and holidays.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behaviour, astroturfing, and narrative manipulation campaigns</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyse large datasets, trace user behaviour patterns, and uncover coordinated networks of threat actors conducting influence operations</li>\n</ul>\n<ul>\n<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioural clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>\n</ul>\n<ul>\n<li>Monitor and analyse state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>\n</ul>\n<ul>\n<li>Forecast how advances in AI technology—including improved content generation, voice synthesis, and multimodal capabilities—will reshape the influence operations landscape and inform safety-by-design strategies</li>\n</ul>\n<p><strong>You 
may be a good fit if you:</strong></p>\n<ul>\n<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behaviour, disinformation, or information warfare</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>\n</ul>\n<ul>\n<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<ul>\n<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>\n</ul>\n<ul>\n<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>\n</ul>\n<ul>\n<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>\n</ul>\n<ul>\n<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>\n</ul>\n<ul>\n<li>Familiarity with social network analysis techniques and tools for mapping coordinated behaviour</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning 
security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration attorney to assist with the process.</p>","url":"https://yubhub.co/jobs/job_1ee98770-f81","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5140239008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000 - $290,000 USD","x-skills-required":["SQL","Python","influence operations","disinformation","coordinated inauthentic behaviour","astroturfing","narrative manipulation campaigns","large language models","open-source intelligence (OSINT) methodologies","social network analysis techniques"],"x-skills-preferred":["fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic","background in intelligence analysis, information operations, or counter-disinformation","experience building and scaling threat detection systems or abuse monitoring 
programs"],"datePosted":"2026-03-08T13:47:58.152Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, influence operations, disinformation, coordinated inauthentic behaviour, astroturfing, narrative manipulation campaigns, large language models, open-source intelligence (OSINT) methodologies, social network analysis techniques, fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic, background in intelligence analysis, information operations, or counter-disinformation, experience building and scaling threat detection systems or abuse monitoring programs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93c50f21-80e"},"title":"Strategic Risk Analyst","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Strategic Risk Analyst</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$198K – $320K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and 
mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analysing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p>We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.</p>\n<p><strong>About the role</strong></p>\n<p>As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI’s products and platforms. You will synthesise internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritisation inputs.</p>\n<p>You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. 
This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Monitor and analyse internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.</li>\n</ul>\n<ul>\n<li>Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distil implications for OpenAI’s products and threat landscape.</li>\n</ul>\n<ul>\n<li>Identify and deep dive into harms and misuse across products and channels, turning messy signals into clear analytic findings.</li>\n</ul>\n<ul>\n<li>Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover—pressure-testing hypotheses early.</li>\n</ul>\n<ul>\n<li>Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.</li>\n</ul>\n<ul>\n<li>Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.</li>\n</ul>\n<ul>\n<li>Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.</li>\n</ul>\n<ul>\n<li>Build early-warning and monitoring capabilities with data, engineering, and visualisation partners, including dashboards that highlight leading indicators and unusual changes.</li>\n</ul>\n<ul>\n<li>Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.</li>\n</ul>\n<ul>\n<li>Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders—and ensuring 
decisions and follow-ups are crisp.</li>\n</ul>\n<p><strong>You might thrive in this role if you</strong></p>\n<ul>\n<li>Have significant experience (typically <strong>5+ years</strong>) in trust and safety, integrity, security, policy analysis, or intelligence work.</li>\n</ul>\n<ul>\n<li>Have a demonstrated ability to analyse complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritised recommendations.</li>\n</ul>\n<ul>\n<li>Have strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.</li>\n</ul>\n<ul>\n<li>Are comfortable working across qualitative and quantitative inputs, including (1) casework,</li>\n</ul>","url":"https://yubhub.co/jobs/job_93c50f21-80e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/d821a725-671f-4327-b918-9be90ef7be45","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$198K – $320K • Offers Equity","x-skills-required":["trust and safety","integrity","security","policy analysis","intelligence work","online harms","AI-enabled misuse","harassment","coordinated abuse","scams","synthetic media","influence operations","brand safety issues"],"x-skills-preferred":["data analysis","data visualisation","machine learning","natural language processing","software development"],"datePosted":"2026-03-06T18:42:41.351Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, integrity, security, policy analysis, intelligence work, online harms, AI-enabled misuse, harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues, data analysis, data visualisation, machine learning, natural language processing, software development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_036ee6e8-526"},"title":"Abuse Investigator (National Security)","description":"<p><strong>Location</strong></p>\n<p>Remote - US; London, UK; San Francisco; Seattle</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Remote</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230.4K – $425K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial 
intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iteratively updating based on what we learn.</p>\n<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting malicious uses of our platform and disrupting actors that violate our policies or engage in other harmful behavior. This will require expert understanding of our products and data, and experience investigating threat actors. You will also respond to time-sensitive escalations, especially those that are not caught by our existing tools and safeguards.</p>\n<p>This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment. You’ll need a proven ability to quickly learn new processes, systems, and team dynamics while thriving in ambiguous, rapidly changing, and high-pressure environments.</p>\n<p>This role is remote-friendly, though you’re welcome to work from our San Francisco office if desired. The role will include resolving urgent escalations outside of normal work hours. 
Some investigations may involve sensitive content, including sexual, violent, or otherwise-disturbing material.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Investigate activity and disrupt abusive operations in partnership with our policy, legal, integrity, global affairs and security teams, including by conducting cross-internet and open source research</li>\n</ul>\n<ul>\n<li>Develop abuse signals and tracking strategies to help proactively detect harmful activity on our platform</li>\n</ul>\n<ul>\n<li>Communicate investigation findings from your work with stakeholders internally and, at times, externally</li>\n</ul>\n<ul>\n<li>Develop a categorical understanding of our products and data, and work with technical teams to improve our data and tooling</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep expertise in open source intelligence and subject matter expertise in national security and/or influence operations, particularly where it intersects with emerging technical risks.</li>\n</ul>\n<ul>\n<li>Have strong familiarity with technical investigations, especially using SQL and Python, in a government/military, think tank, or tech company setting.</li>\n</ul>\n<ul>\n<li>Speak another language (ideally Chinese, Arabic, Farsi, Hindi), in addition to English.</li>\n</ul>\n<ul>\n<li>Have 10+ years of experience tracking threat actors in abuse domains.</li>\n</ul>\n<ul>\n<li>Have at least two years of experience helping to develop automated approaches to accomplishing your work.</li>\n</ul>\n<ul>\n<li>Have experience presenting analytic work in public or policy settings.</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models.</li>\n</ul>\n<ul>\n<li>Are able to hold a government security clearance.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose 
artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_036ee6e8-526","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/aede8703-4ee6-437a-aa38-16787ed7f202","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230.4K – $425K","x-skills-required":["open source intelligence","SQL","Python","national security","influence operations","language models","government security clearance"],"x-skills-preferred":["technical investigations","cross-internet and open source research","abuse signals and tracking strategies","categorical understanding of products and data"],"datePosted":"2026-03-06T18:39:51.907Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; London, UK; San Francisco; Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"open source intelligence, SQL, Python, national security, influence operations, language models, government security clearance, technical investigations, cross-internet and open source research, abuse signals and tracking strategies, categorical understanding of products and 
data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230400,"maxValue":425000,"unitText":"YEAR"}}}]}