{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/abuse"},"x-facet":{"type":"skill","slug":"abuse","display":"Abuse","count":40},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_db7c82e6-2fb"},"title":"Engineering Manager - Cloudforce One","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a member of the Cloudforce One Legal Response team, you will be responsible for services to enable Cloudflare’s abuse report pipeline, including the processing of reports, communications with submitters and customers, and remediations on abusive Cloudflare entities.</p>\n<p>You will lead a team of at least 5 engineers and have the opportunity to hire additional team members to continue building out this team. 
Your team will work in tandem with other Cloudforce One and Trust &amp; Safety teams, as well as with the Cloudflare developers of the services that may be abused, to accomplish its mission.</p>\n<p>Responsibilities</p>\n<p>Not only will you be responsible for ensuring the engineering standards and compliance of the services owned by your team, but you will also be responsible for the professional development of your direct reports as well as collaboration with key stakeholders in the Trust &amp; Safety teams.</p>\n<p>Desirable skills and experience</p>\n<ul>\n<li>Prior experience working with abuse requirements in a regulated environment or compliance domain</li>\n<li>8+ years of software development experience</li>\n<li>Demonstrable experience leading a geographically distributed team</li>\n<li>Empathetic, proactive, and constructive communication skills, verbal and written</li>\n<li>Excellent operational principles (observability, alerting, tracing, incident management, SLIs &amp; SLOs, capacity planning, etc.)</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. 
Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. 
Please tell us if you require a reasonable accommodation to apply for a job.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_db7c82e6-2fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7167621","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Software Development","Leadership","Communication","Operational Principles","Abuse Requirements","Regulated Environment"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:40.876Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software Development, Leadership, Communication, Operational Principles, Abuse Requirements, Regulated Environment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_536aa8eb-f7c"},"title":"Technical Influence Operations Threat Investigator","description":"<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. 
In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation.</p>\n<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>\n<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behavior, astroturfing, and narrative manipulation campaigns</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover coordinated networks of threat actors conducting influence operations</li>\n</ul>\n<ul>\n<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioral clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging 
platforms, and other digital ecosystems</li>\n</ul>\n<ul>\n<li>Monitor and analyze state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>\n</ul>\n<ul>\n<li>Forecast how advances in AI technology, including improved content generation, voice synthesis, and multimodal capabilities, will reshape the influence operations landscape and inform safety-by-design strategies</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>\n</ul>\n<ul>\n<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<ul>\n<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>\n</ul>\n<p>Strong candidates may also 
have:</p>\n<ul>\n<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>\n</ul>\n<ul>\n<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>\n</ul>\n<ul>\n<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>\n</ul>\n<ul>\n<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>\n</ul>\n<ul>\n<li>Familiarity with social network analysis techniques and tools for mapping coordinated behavior</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>","url":"https://yubhub.co/jobs/job_536aa8eb-f7c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5140239008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000-$290,000 USD","x-skills-required":["Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare","Proficiency in SQL and Python for data analysis and threat detection","Experience tracking and attributing influence campaigns to 
specific threat actors, including state-sponsored operations","Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations","Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems"],"x-skills-preferred":["Experience at a major technology platform working on influence operations, platform integrity, or content authenticity","Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts","Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions","Familiarity with social network analysis techniques and tools for mapping coordinated behavior","Background in AI safety, machine learning security, or technology abuse investigation"],"datePosted":"2026-04-18T15:54:54.163Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare, Proficiency in SQL and Python for data analysis and threat detection, Experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations, Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations, Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems, Experience at a major technology platform working on influence operations, platform integrity, or content authenticity, Background in intelligence 
analysis, information operations, or counter-disinformation within government or military contexts, Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions, Familiarity with social network analysis techniques and tools for mapping coordinated behavior, Background in AI safety, machine learning security, or technology abuse investigation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_36152c4f-9b4"},"title":"Safeguards Enforcement Lead, Frontier Abuse Enforcement","description":"<p>About the role</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p>Responsibilities:</p>\n<p>Own the end-to-end enforcement strategy against unauthorized frontier model abuse, from detection signal development through enforcement action and post-enforcement measurement</p>\n<p>Operationalize detection and review pipelines, translating leads from case investigations and detection outputs into structured review workflows that scale across surfaces</p>\n<p>Drive enforcement actions for high-priority actors, synthesizing signals from across intelligence, detection, and review into enforcement packages suitable for formal escalation</p>\n<p>Partner with Legal and Policy to assess case strength, characterize ToS and IP violations, and support enforcement escalations through formal channels</p>\n<p>Close the loop between enforcement outcomes and upstream improvements, channeling review results and enforcement findings back to policy updates and detection refinements</p>\n<p>Develop and maintain a dynamic enforcement framework that accounts for the complexity of cross-surface enforcement, including varied escalation paths, partner coordination, and enforcement consistency across surfaces</p>\n<p>Collaborate with Threat Intelligence, Research, Engineering, and Policy partners to ensure detection coverage keeps pace with evolving frontier abuse tactics</p>\n<p>Maintain rigorous documentation of enforcement decisions, pipeline logic, and precedents to build institutional knowledge</p>\n<p>Qualifications:</p>\n<p>Required</p>\n<p>5+ years of experience in trust &amp; safety, abuse enforcement, fraud investigation, policy, or a related field, with demonstrated ownership of complex, high-stakes enforcement programs</p>\n<p>Track record of building detection and enforcement approaches for novel or emerging abuse vectors where established playbooks don&#39;t exist</p>\n<p>Experience supporting or directly contributing to formal enforcement actions, 
including case documentation, evidence packaging, and escalation coordination</p>\n<p>Strong data analysis skills, comfortable navigating complex, multi-table datasets to surface behavioral patterns and support investigations</p>\n<p>Experience conducting structured investigations, including open-source intelligence techniques and cross-referencing external data sources to attribute activity</p>\n<p>Demonstrated ability to translate ambiguous policy questions into defensible enforcement decisions and clear written findings</p>\n<p>Strong written and verbal communication skills, able to present complex enforcement cases clearly to stakeholders across Legal, Policy, and Engineering</p>\n<p>Preferred</p>\n<p>Familiarity with the AI/ML ecosystem, including how model distillation, fine-tuning, and synthetic data generation work in practice, and how actors attempt to obscure this activity</p>\n<p>Experience conducting threat actor profiling or open-source investigations in a trust &amp; safety, intelligence, or legal context</p>\n<p>Experience working with generative AI products, including using AI tools to accelerate investigative and analytical workflows</p>\n<p>Background or interest in AI policy, IP enforcement, competitive intelligence, or AI governance</p>\n<p>Experience coordinating with external enforcement partners or platform partners on escalated enforcement actions</p>\n<p>Education: At least a Bachelor&#39;s degree in a relevant field, or equivalent experience.</p>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $230,000-$270,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_36152c4f-9b4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5162211008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000-$270,000 USD","x-skills-required":["trust & safety","abuse enforcement","fraud investigation","policy","data analysis","structured investigations","open-source intelligence","written and verbal communication"],"x-skills-preferred":["AI/ML ecosystem","threat actor profiling","generative AI products","AI policy","IP enforcement","competitive intelligence","AI governance"],"datePosted":"2026-04-18T15:54:40.142Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust & safety, abuse enforcement, fraud investigation, policy, data analysis, structured investigations, open-source intelligence, written and verbal communication, AI/ML ecosystem, threat actor profiling, generative AI products, AI policy, IP enforcement, competitive intelligence, AI governance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_051843ef-f93"},"title":"Vendor and Contract Manager, Safeguards","description":"<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships. 
This includes identifying and selecting vendors, contract negotiation, onboarding, ongoing performance management, and renewal.</p>\n<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>\n<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>\n<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue in one day.</p>\n<p>*Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>\n<p>Responsibilities:</p>\n<p>Vendor Selection &amp; Onboarding - Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</p>\n<p>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</p>\n<p>Forecast future partnership needs and proactively research vendors and partners that could meet 
emerging Safeguards requirements</p>\n<p>Contract &amp; Budget Management - Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</p>\n<p>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</p>\n<p>Handle invoicing, payment, and renewal processes with partners</p>\n<p>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</p>\n<p>Ongoing Vendor &amp; Partner Management - Manage vendor and researcher access to models and products during testing phases and trials</p>\n<p>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</p>\n<p>Report on vendor performance, spend, and contract status to Safeguards leadership</p>\n<p>You may be a good fit if you have:</p>\n<p>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</p>\n<p>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</p>\n<p>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</p>\n<p>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</p>\n<p>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</p>\n<p>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</p>\n<p>Comfort with ambiguity and fast-moving environments: you&#39;ve built or 
significantly improved vendor management processes, not just inherited them</p>\n<p>Nice to have:</p>\n<p>Experience in AI safety or AI-adjacent vendor ecosystems</p>\n<p>Familiarity with procurement tools such as Ironclad or Zip</p>\n<p>Annual compensation range for this role is $245,000-$285,000 USD</p>","url":"https://yubhub.co/jobs/job_051843ef-f93","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156596008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["vendor management","procurement","contract operations","risk management","fraud prevention","compliance","trust and safety","AI safety","account abuse prevention","platform integrity","verification vendors","threat intelligence providers","content screening tools"],"x-skills-preferred":["Ironclad","Zip","research collaboration","civil society consultation","model red-teaming"],"datePosted":"2026-04-18T15:54:23.403Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, verification vendors, threat intelligence providers, content screening tools, Ironclad, Zip, research collaboration, civil society consultation, model 
red-teaming","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da06ef8d-890"},"title":"Vendor and Contract Manager, Safeguards","description":"<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships , from identifying and selecting vendors through contract negotiation, onboarding, ongoing performance management, and renewal.</p>\n<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>\n<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>\n<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment , someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue in one day.</p>\n<p>*Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Vendor 
Selection &amp; Onboarding: Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>\n<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>\n<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>\n<li>Contract &amp; Budget Management: Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>\n<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>\n<li>Handle invoicing, payment, and renewal processes with partners</li>\n<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>\n<li>Ongoing Vendor &amp; Partner Management: Manage vendor and researcher access to models and products during testing phases and trials</li>\n<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>\n<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>\n<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>\n<li>Track record managing vendor budgets, 
including forecasting, tracking spend, and making tradeoff recommendations</li>\n<li>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>\n<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>\n<li>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>\n<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>\n<li>Familiarity with procurement tools such as Ironclad or Zip</li>\n</ul>\n<p><strong>Logistics:</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_da06ef8d-890","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156596008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["vendor management","procurement","contract operations","risk management","fraud prevention","compliance","trust and safety","AI safety","account abuse prevention","platform integrity","cross-functional collaboration","ambiguity tolerance","fast-paced environments"],"x-skills-preferred":["AI safety vendor ecosystems","procurement tools","Ironclad","Zip"],"datePosted":"2026-04-18T15:53:59.839Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, cross-functional collaboration, ambiguity tolerance, fast-paced environments, AI safety vendor ecosystems, procurement tools, Ironclad, Zip","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a922c6ae-3c1"},"title":"Technical CBRN-E  
Threat Investigator","description":"<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>\n<p>You will work at the intersection of AI safety and CBRN security, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against threat actors who may attempt to leverage our AI technology for developing weapons, synthesizing dangerous compounds, or creating biological harm.</p>\n<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>\n</ul>\n<ul>\n<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make 
informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>\n</ul>\n<ul>\n<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>\n</ul>\n<p>You may be a good fit if you</p>\n<ul>\n<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<ul>\n<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>\n</ul>\n<p>Strong candidates may also have</p>\n<ul>\n<li>Advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or related field</li>\n</ul>\n<ul>\n<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>\n</ul>\n<ul>\n<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Familiarity with synthetic biology, biotechnology, or dual-use 
research</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>","url":"https://yubhub.co/jobs/job_a922c6ae-3c1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066997008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000-$290,000 USD","x-skills-required":["SQL","Python","biosecurity","chemical defense","biological weapons non-proliferation","dual-use research of concern (DURC)","synthetic biology","threat actor profiling","threat intelligence frameworks","large language models","AI technology misuse"],"x-skills-preferred":["advanced degree in biological sciences, chemistry, biodefense, biosecurity, or related field","real-world experience countering weapons of mass destruction or other high-risk asymmetric threats","experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information","background in AI safety, machine learning security, or technology abuse investigation","familiarity with synthetic biology, biotechnology, or dual-use research","experience building and scaling threat detection systems or abuse monitoring programs","active Top Secret security clearance"],"datePosted":"2026-04-18T15:53:57.472Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, 
DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, threat actor profiling, threat intelligence frameworks, large language models, AI technology misuse, advanced degree in biological sciences, chemistry, biodefense, biosecurity, or related field, real-world experience countering weapons of mass destruction or other high-risk asymmetric threats, experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information, background in AI safety, machine learning security, or technology abuse investigation, familiarity with synthetic biology, biotechnology, or dual-use research, experience building and scaling threat detection systems or abuse monitoring programs, active Top Secret security clearance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f53caced-334"},"title":"Software Engineer, Cloud Inference Safeguards","description":"<p>We are seeking a Software Engineer to build and operate the safety, oversight, and intervention mechanisms that protect Claude on third-party cloud service provider (CSP) platforms.</p>\n<p>As the engineer responsible for Safeguards on those surfaces, you will ensure that every request served through our CSP partners is monitored for misuse, enforced against policy, and compliant with the data residency and privacy commitments that enterprise CSP customers expect.</p>\n<p>You will sit at the seam between the Safeguards organisation and the Cloud Inference team: taking classifiers, detection signals, and enforcement policies developed by Safeguards and making 
them run reliably inside a CSP partner&#39;s infrastructure at serving-path latency and scale.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build, deploy, and operate real-time safeguards infrastructure (classifiers, rate limits, enforcement actions, and intervention hooks) embedded directly in the third-party CSP inference serving path</li>\n</ul>\n<ul>\n<li>Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring we can detect abuse and monitor model behaviour while honouring regionalisation boundaries and enterprise contractual commitments</li>\n</ul>\n<ul>\n<li>Develop telemetry, logging, and evaluation pipelines that give Safeguards, Policy, and T&amp;S operational teams situational awareness over CSP traffic and close the visibility gap between third-party and first-party serving</li>\n</ul>\n<ul>\n<li>Dive into the CSP serving stack to identify the lowest-impact points to gather signals or introduce interventions without degrading latency, stability, or overall architecture</li>\n</ul>\n<ul>\n<li>Hold a high operational bar: own on-call, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep Claude safe</li>\n</ul>\n<ul>\n<li>Work closely with Safeguards research, Policy &amp; Enforcement, the Cloud Inference team, and CSP partner contacts to turn detection research and policy decisions into production enforcement that works inside a partner&#39;s cloud.</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have a Bachelor&#39;s degree in Computer Science, Software Engineering, or comparable experience</li>\n</ul>\n<ul>\n<li>Have 4–10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust &amp; safety, anti-abuse, fraud, or integrity systems</li>\n</ul>\n<ul>\n<li>Are proficient in Python and comfortable working across the stack, from request-path 
services to data pipelines to internal tooling</li>\n</ul>\n<ul>\n<li>Think adversarially: you can see a system from a bad actor&#39;s perspective, anticipate how they will respond to countermeasures, and design defences in depth rather than single points of enforcement</li>\n</ul>\n<ul>\n<li>Have experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets</li>\n</ul>\n<ul>\n<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>\n</ul>\n<ul>\n<li>Have strong communication skills and can explain complex technical and risk tradeoffs to non-technical stakeholders across Policy, Legal, and partner organisations</li>\n</ul>\n<ul>\n<li>Enjoy working in a fast-paced, early environment; comfortable with adapting priorities as driven by the rapidly evolving AI space</li>\n</ul>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>Building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale</li>\n</ul>\n<ul>\n<li>Machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production</li>\n</ul>\n<ul>\n<li>Major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring), or experience shipping software that runs inside a partner&#39;s cloud rather than your own</li>\n</ul>\n<ul>\n<li>Data residency, privacy engineering, or compliance-constrained architectures, particularly where telemetry has to stay within regional or contractual boundaries</li>\n</ul>\n<ul>\n<li>Working closely with operational and human-review teams to build custom internal tooling, admin UX, and alerting</li>\n</ul>\n<ul>\n<li>Adversarial mindset: has shipped defences against motivated 
attackers before, knows what it feels like when they adapt, and can sprint to close a gap before it becomes an incident</li>\n</ul>\n<ul>\n<li>Comfortable operating at the intersection of platform/infra engineering and trust &amp; safety: neither a pure infra engineer nor a pure T&amp;S engineer, but someone who can credibly do both</li>\n</ul>\n<ul>\n<li>Has shipped software that runs inside someone else&#39;s infrastructure (partner cloud, embedded deployment, or similar) and knows how to get things done when you don&#39;t control the whole stack</li>\n</ul>\n<ul>\n<li>Senior enough to own a cross-team seam independently, drive consensus across orgs, and make latency/safety tradeoff calls without escalation</li>\n</ul>\n<ul>\n<li>TypeScript or Rust, and agentic coding tools such as Claude Code</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $405,000-$485,000 USD</p>","url":"https://yubhub.co/jobs/job_f53caced-334","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5168829008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Cloud service provider (CSP)","Data residency and privacy","Machine learning serving infrastructure","Major cloud platform internals","Data residency, privacy engineering, or compliance-constrained 
architectures"],"x-skills-preferred":["TypeScript","Rust","Agentic coding tools","Claude Code","Trust and safety","Anti-abuse","Fraud","Integrity systems"],"datePosted":"2026-04-18T15:53:08.973Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Cloud service provider (CSP), Data residency and privacy, Machine learning serving infrastructure, Major cloud platform internals, Data residency, privacy engineering, or compliance-constrained architectures, TypeScript, Rust, Agentic coding tools, Claude Code, Trust and safety, Anti-abuse, Fraud, Integrity systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86d4c902-c89"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>\n<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. 
Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners, including hotlines, 
NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology 
development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $245,000-$285,000 USD</p>","url":"https://yubhub.co/jobs/job_86d4c902-c89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis tools","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","non-consensual intimate imagery","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem working on these harms","open-source investigations or threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:52:37.777Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation 
and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d8c3966d-267"},"title":"Software Engineer, Safeguards","description":"<p>Job Description:</p>\n<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems.</p>\n<p>About the Role</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop monitoring systems to detect unwanted behaviours from our API partners and potentially take automated enforcement actions; surface these in internal dashboards to analysts for manual review</li>\n</ul>\n<ul>\n<li>Build abuse detection mechanisms and infrastructure</li>\n</ul>\n<ul>\n<li>Surface abuse patterns to our research teams to harden models at the training stage</li>\n</ul>\n<ul>\n<li>Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering or comparable experience</li>\n</ul>\n<ul>\n<li>5-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection and mitigation</li>\n</ul>\n<ul>\n<li>Proficiency in Python and Typescript</li>\n</ul>\n<ul>\n<li>Ability to work across the stack</li>\n</ul>\n<ul>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p>Strong Candidates May 
Also:</p>\n<ul>\n<li>Have experience building trust and safety detection mechanisms and intervention for AI/ML systems</li>\n</ul>\n<ul>\n<li>Have experience with prompt engineering, jailbreak attacks, and other adversarial inputs</li>\n</ul>\n<ul>\n<li>Have worked closely with operational teams to build custom internal tooling</li>\n</ul>\n<p>Logistics</p>\n<ul>\n<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p>How We&#39;re Different</p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. 
As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come Work With Us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>","url":"https://yubhub.co/jobs/job_d8c3966d-267","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4951844008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$425,000 USD","x-skills-required":["Python","Typescript","Software Engineering","Abuse Detection","Machine Learning"],"x-skills-preferred":["Prompt Engineering","Jailbreak Attacks","Adversarial Inputs","Custom Internal Tooling"],"datePosted":"2026-04-18T15:52:30.640Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Typescript, Software Engineering, Abuse Detection, Machine Learning, Prompt Engineering, Jailbreak Attacks, Adversarial Inputs, Custom Internal 
Tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e95732e6-2ad"},"title":"Software Engineer, Account Abuse","description":"<p>About the role</p>\n<p>The Account Abuse team at Anthropic is tasked with ensuring the company&#39;s computing capacity is allocated fairly, minimizing resources available to bad actors and preventing them from coming back. As a software engineer on this team, you will build systems that gather and analyze signals at scale, balancing tradeoffs and coordinating closely with stakeholder teams throughout the company.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Think and respond quickly in a rapidly-changing greenfield environment</li>\n<li>Jump into other teams&#39; code to identify key points to gather signals or introduce interventions with minimal impact on their systems&#39; stability, complexity, or overall architecture</li>\n<li>Integrate with third-party data-enrichment vendors</li>\n<li>Create monitoring dashboards, alerts, and internal admin UX</li>\n<li>Work closely with data scientists to maintain situational awareness of current usage patterns and trends, and with the Policy &amp; Enforcement team to maximize the impact of their human-review availability</li>\n<li>Build robust and reliable multi-layered defenses</li>\n<li>Lead root cause analyses and deep-dive investigations into account activity to identify abuse patterns, uncover emerging attack vectors, and inform both immediate enforcement actions and longer-term systemic defenses</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>5-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse 
detection</li>\n<li>Proficiency in Python, SQL, and data analysis tools</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p>Preferred qualifications</p>\n<ul>\n<li>Experience building trust and safety mechanisms for and using AI/ML systems, such as fraud-detection models or security monitoring tools, or the infrastructure to support these systems at scale</li>\n<li>Experience working closely with operational teams to build custom internal tooling</li>\n</ul>\n<p>Annual compensation range</p>\n<p>$320,000-$405,000 USD</p>","url":"https://yubhub.co/jobs/job_e95732e6-2ad","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5123039008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","SQL","data analysis tools","software engineering","integrity","spam","fraud","abuse detection"],"x-skills-preferred":["trust and safety mechanisms","AI/ML systems","fraud-detection models","security monitoring tools","infrastructure"],"datePosted":"2026-04-18T15:52:06.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, data analysis tools, software engineering, integrity, spam, fraud, abuse detection, trust and safety mechanisms, AI/ML systems, fraud-detection models, security monitoring tools, 
infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1410a549-44e"},"title":"Director of Machine Learning, Safety & Mods","description":"<p>We&#39;re looking for a Director of Machine Learning to lead Reddit&#39;s efforts in building industry-leading ML systems that keep our platform safe and foster healthy online communities.</p>\n<p>This leader will drive the strategy, development, and deployment of machine learning models that detect and prevent harmful content and behavior at scale.</p>\n<p>In this role, you will own the roadmap for Safety and moderation ML, lead a team of applied scientists and engineers, and partner cross-functionally across Product, Engineering, Safety operations, Trust &amp; Community, and AI/ML Platform to innovate on real-time detection, automation, and user protection systems.</p>\n<p>You will leverage modern ML, including fine-tuned LLMs, to ensure Reddit remains a safe, welcoming, and positive environment for our global user base.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Set the vision and strategy for applying ML to Trust &amp; Safety, ensuring scalable, proactive protection against evolving abuse patterns.</li>\n</ul>\n<ul>\n<li>Lead and grow a high-performing Safety ML organization, including applied research, model development, productionization, and continuous improvement.</li>\n</ul>\n<ul>\n<li>Develop and deploy cutting-edge Safety ML systems (including fine-tuned LLMs and transformer models) that outperform state-of-the-art solutions in quality, latency, and efficiency.</li>\n</ul>\n<ul>\n<li>Partner with Trust &amp; Safety, Product, Moderation, and AI/ML Platform teams to identify safety risks, emerging harm vectors, and ML opportunities that improve detection, enforcement, and user 
experience.</li>\n</ul>\n<ul>\n<li>Drive successful experimentation, evaluation, and model lifecycle management, ensuring high precision, recall, explainability, and policy alignment.</li>\n</ul>\n<ul>\n<li>Champion ethical and responsible AI practices in all Safety ML solutions.</li>\n</ul>\n<ul>\n<li>Track performance through metrics, research-based iteration, and alignment with Reddit’s safety policies and regulatory standards.</li>\n</ul>\n<ul>\n<li>Represent Safety ML leadership internally and externally, including conferences, publications, industry groups, and cross-company collaboration initiatives.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>10+ years of experience in Machine Learning, AI, or applied research, with a strong background in Trust &amp; Safety, abuse prevention, detection, or content integrity.</li>\n</ul>\n<ul>\n<li>5+ years of experience leading multi-disciplinary ML teams (applied science, engineering, analytics) in a high-growth or high-impact environment.</li>\n</ul>\n<ul>\n<li>Proven track record of shipping ML systems at scale in production, ideally including transformer-based models and LLM fine-tuning.</li>\n</ul>\n<ul>\n<li>Depth in NLP, content understanding, detection systems, supervised and weak-supervision techniques.</li>\n</ul>\n<ul>\n<li>Strong cross-functional leadership skills, with ability to influence executives and foster alignment across Safety, Product, and Engineering.</li>\n</ul>\n<ul>\n<li>Thought leadership in responsible AI, safety ML research, or safety measurement frameworks.</li>\n</ul>\n<p>Bonus points if you have:</p>\n<ul>\n<li>Experience building or operating real-time abuse detection and automated moderation systems in a complex user-generated content ecosystem.</li>\n</ul>\n<ul>\n<li>Prior work in consumer-facing tech, social platforms, or large-scale community-driven products.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive Healthcare Benefits and Income Replacement 
Programs</li>\n</ul>\n<ul>\n<li>401k with Employer Match</li>\n</ul>\n<ul>\n<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>\n</ul>\n<ul>\n<li>Family Planning Support</li>\n</ul>\n<ul>\n<li>Gender-Affirming Care</li>\n</ul>\n<ul>\n<li>Mental Health &amp; Coaching Benefits</li>\n</ul>\n<ul>\n<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>\n</ul>\n<ul>\n<li>Generous Paid Parental Leave</li>\n</ul>","url":"https://yubhub.co/jobs/job_1410a549-44e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7430544","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$265,800-$365,100 USD","x-skills-required":["Machine Learning","AI","Applied Research","Trust & Safety","Abuse Prevention","Detection","Content Integrity","NLP","Content Understanding","Detection Systems","Supervised and Weak-Supervision Techniques"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:09.971Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, AI, Applied Research, Trust & Safety, Abuse Prevention, Detection, Content Integrity, NLP, Content Understanding, Detection Systems, Supervised and Weak-Supervision 
Techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":265800,"maxValue":365100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_be4dd922-943"},"title":"Safeguards Enforcement Lead, Frontier Abuse Enforcement","description":"<p>About the role</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the end-to-end enforcement strategy against unauthorized frontier model abuse, from detection signal development through enforcement action and post-enforcement measurement</li>\n<li>Operationalize detection and review pipelines, translating leads from case investigations and detection outputs into structured review workflows that scale across surfaces</li>\n<li>Drive enforcement actions for high-priority actors, synthesizing signals from across intelligence, detection, and review into enforcement packages suitable for formal escalation</li>\n<li>Partner with Legal and Policy to assess case strength, characterize ToS and IP violations, and support enforcement escalations through formal channels</li>\n<li>Close the loop between enforcement outcomes and upstream improvements, channeling review results and enforcement findings back to policy updates and detection refinements</li>\n<li>Develop and maintain a dynamic enforcement framework that accounts for the complexity of cross-surface enforcement, including varied escalation paths, partner coordination, and enforcement consistency across surfaces</li>\n<li>Collaborate with Threat Intelligence, Research, Engineering, and Policy partners to ensure detection coverage keeps pace with evolving frontier abuse 
tactics</li>\n<li>Maintain rigorous documentation of enforcement decisions, pipeline logic, and precedents to build institutional knowledge</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>5+ years of experience in trust &amp; safety, abuse enforcement, fraud investigation, policy, or a related field, with demonstrated ownership of complex, high-stakes enforcement programs</li>\n<li>Track record of building detection and enforcement approaches for novel or emerging abuse vectors where established playbooks don&#39;t exist</li>\n<li>Experience supporting or directly contributing to formal enforcement actions, including case documentation, evidence packaging, and escalation coordination</li>\n<li>Strong data analysis skills, comfortable navigating complex, multi-table datasets to surface behavioral patterns and support investigations</li>\n<li>Experience conducting structured investigations, including open-source intelligence techniques and cross-referencing external data sources to attribute activity</li>\n<li>Demonstrated ability to translate ambiguous policy questions into defensible enforcement decisions and clear written findings</li>\n<li>Strong written and verbal communication skills, able to present complex enforcement cases clearly to stakeholders across Legal, Policy, and Engineering</li>\n</ul>\n<p>Preferred qualifications include familiarity with the AI/ML ecosystem, experience conducting threat actor profiling or open-source investigations, and experience working with generative AI products.</p>
","url":"https://yubhub.co/jobs/job_be4dd922-943","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5162211008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000-$270,000 USD","x-skills-required":["trust & safety","abuse enforcement","fraud investigation","policy","data analysis","structured investigations","open-source intelligence","threat actor profiling","generative AI products"],"x-skills-preferred":["AI/ML ecosystem","open-source investigations"],"datePosted":"2026-04-18T15:46:34.003Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust & safety, abuse enforcement, fraud investigation, policy, data analysis, structured investigations, open-source intelligence, threat actor profiling, generative AI products, AI/ML ecosystem, open-source investigations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a144188-686"},"title":"Solutions Engineer, Benelux","description":"<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. As a Solutions Engineer, you will be part of the Pre-Sales Solution Engineering organisation, owning the technical sale of the Cloudflare solution portfolio. 
You will work closely with our customers and partners to educate, empower, and ensure their success delivering Cloudflare security, reliability, and performance solutions.</p>\n<p>Your role will be to build champions and enable technical teams alongside our Benelux sales organisation to drive pipeline and close deals. As the technical advocate inside Cloudflare, you will work closely with teams across Sales, Product, Engineering, Customer Support and our channel partners to ensure our customers succeed with Cloudflare security, reliability, and performance solutions.</p>\n<p>We are looking for someone with strong experience in pre-sales, partner and account management, and excellent verbal and written communication skills in Dutch and English, suited for both technical and executive-level engagement. You should be comfortable speaking about the Cloudflare vision with all audiences.</p>\n<p>Specifically, we are looking for you to:</p>\n<ul>\n<li>Build and maintain long-term technical relationships with prospects, customers and ecosystem organisations across Benelux through demonstrating value, enablement, and uncovering new areas of potential revenue</li>\n<li>Drive technical solution design conversations through use case qualification and collaborative technical wins through demonstrations and proofs-of-concept</li>\n<li>Develop passionate technical champions within the technology ranks of your accounts, helping them drive sales for identified opportunities and build revenue pipeline</li>\n<li>Evangelize and represent Cloudflare through technical thought leadership and expertise</li>\n<li>Be the voice of the market internally at Cloudflare, engaging with and influencing Product and Engineering teams to meet the needs of your accounts and their customers</li>\n</ul>\n<p>You will be required to travel within the Benelux to support engagements, attend conferences and industry events, and collaborate with Cloudflare teammates.</p>\n<p>Examples of desirable skills, knowledge 
and experience include:</p>\n<ul>\n<li>Fluency in Dutch and English (verbal and written)</li>\n<li>Ability to communicate complex technical concepts to both technical and non-technical audiences, including C-level stakeholders</li>\n<li>Strong presentation and storytelling skills (whiteboarding, demos, executive briefings)</li>\n<li>Experience managing technical sales cycles end-to-end</li>\n<li>Ability to articulate business value and ROI of technical solutions, not just features</li>\n<li>Experience working within an integrated account team (alongside Account Executives, Customer Success, BDRs, and channel partners)</li>\n<li>Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting</li>\n<li>Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN</li>\n<li>Cloud networking concepts: VPCs, peering, interconnect</li>\n<li>DDoS attack types (L3/L4/L7) and mitigation strategies</li>\n<li>Web Application Firewall (WAF) rule configuration and tuning</li>\n<li>VPN concepts and their limitations relative to Zero Trust approaches</li>\n<li>API security: API Gateway, rate limiting, schema validation, abuse prevention</li>\n<li>Bot management concepts and detection techniques</li>\n<li>SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform)</li>\n<li>Zero Trust Network Access (ZTNA) vs. traditional VPN architecture</li>\n<li>HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics</li>\n<li>Detailed understanding of the flow from user to application, including hybrid cloud architectures</li>\n<li>Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models)</li>\n<li>Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform)</li>\n<li>Cloudflare Workers and the edge compute model (JavaScript/TypeScript)</li>\n<li>Familiarity with related primitives: KV, Object storage, serverless compute</li>\n<li>Familiarity with the competitive landscape across Cloudflare&#39;s product areas</li>\n<li>Understanding of why customers move from on-premises appliances to cloud-delivered security</li>\n<li>Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare</li>\n</ul>\n<p>We value intellectual curiosity, adaptability, and a collaborative spirit. On the Solutions Engineering team, you will find an environment where everyone brings different strengths and jumps in to help each other. If you are passionate about technology and look forward to helping customers and ecosystem organisations realise the full promise of Cloudflare, we&#39;d love to hear from you.</p>\n<p>What makes Cloudflare special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. 
Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend</p>","url":"https://yubhub.co/jobs/job_8a144188-686","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7742347","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting","Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN","Cloud networking concepts: VPCs, peering, interconnect","DDoS attack types (L3/L4/L7) and mitigation strategies","Web Application Firewall (WAF) rule configuration and tuning","VPN concepts and their limitations relative to Zero Trust approaches","API security: API Gateway, rate limiting, schema validation, abuse prevention","Bot management concepts and detection techniques","SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform)","Zero Trust Network Access (ZTNA) vs. traditional VPN architecture","HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics","Detailed understanding of the flow from user to application, including hybrid cloud architectures","Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models)","Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform)","Cloudflare Workers and the edge compute model (JavaScript/TypeScript)","Familiarity with related primitives: KV, Object storage, serverless compute","Familiarity with the competitive landscape across Cloudflare's product areas","Understanding of why customers move from on-premises appliances to cloud-delivered security","Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:26.177Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Networking technologies including TCP/IP, UDP, DNS (authoritative and recursive, DNSSEC), IPv4/IPv6, BGP routing, Autonomous Systems, subnetting, Tunneling and connectivity: GRE, IPsec, MPLS, SDWAN, Cloud networking concepts: VPCs, peering, interconnect, DDoS attack types (L3/L4/L7) and mitigation strategies, Web Application Firewall (WAF) rule configuration and tuning, VPN concepts and their limitations relative to Zero Trust approaches, API security: API Gateway, rate limiting, schema validation, abuse prevention, Bot management concepts and detection techniques, SASE concepts and Zero Trust Networking architectures (ZTNA, CASB, SWG, DLP, RBI as integrated platform), Zero Trust Network Access (ZTNA) vs. traditional VPN architecture, HTTP technologies and reverse proxy architecture: WAF, CDN, caching mechanics, Detailed understanding of the flow from user to application, including hybrid cloud architectures, Working knowledge of major cloud platforms: AWS, Azure, GCP (architecture patterns, native security tooling, VPC/peering models), Familiarity with Infrastructure-as-Code concepts (e.g. 
Terraform), Cloudflare Workers and the edge compute model (JavaScript/TypeScript), Familiarity with related primitives: KV, Object storage, serverless compute, Familiarity with the competitive landscape across Cloudflare's product areas, Understanding of why customers move from on-premises appliances to cloud-delivered security, Awareness of relevant industry verticals: Financial Services, eCommerce, Gaming, Media, SaaS, Healthcare"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e03253e3-c7f"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>\n<p>You will be a member of the user well-being team, and your initial focus will be on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human 
exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p><strong>Compensation:</strong></p>\n<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
","url":"https://yubhub.co/jobs/job_e03253e3-c7f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis","detection and review workflows","sensitive content","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem","open-source investigations","threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:45:00.507Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis, detection and review workflows, sensitive content, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, commercial sexual exploitation, NGO and industry ecosystem, open-source investigations, threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation 
pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_24909215-1c8"},"title":"Senior Risk Functional Specialist","description":"<p>Job Title: Senior Risk Functional Specialist</p>\n<p>Location: United States</p>\n<p>Department: Trust and Safety</p>\n<p>Job Description:</p>\n<p>Airbnb was born in 2007 when two hosts welcomed three guests to their San Francisco home, and has since grown to over 5 million hosts who have welcomed over 2 billion guest arrivals in almost every country across the globe.</p>\n<p>Every day, hosts offer unique stays and experiences that make it possible for guests to connect with communities in a more authentic way.</p>\n<p>The Community You Will Join:</p>\n<p>Payment Risk Operations is at the heart of what makes Airbnb a place where anyone can belong anywhere. We&#39;re the guardians of trust in our global marketplace, working tirelessly to create safe and authentic experiences for millions of hosts and guests worldwide.</p>\n<p>Our mission is simple yet powerful: protect our community while preserving the magic of travel.</p>\n<p>We&#39;re passionate about user satisfaction and dedicated to crafting thoughtful policies, intelligent rules, and innovative systems that elevate the quality of every interaction on our platform.</p>\n<p>The Difference You Will Make:</p>\n<p>As a Senior Risk Functional Specialist, you will be responsible for overseeing the operational procedures and escalations related to stored value payment products (e.g. 
gift cards) and the review of performance metrics for business and vendor feedback.</p>\n<p>You will work closely with operational stakeholders and cross-functional partners to enact change to improve our products and processes.</p>\n<p>You will consistently demonstrate excellent decision-making skills as you solve a wide range of complex problems.</p>\n<p>You will be expected to understand and apply Airbnb core values in all of your work.</p>\n<p>Your impact will span:</p>\n<ul>\n<li>Transaction security across all Airbnb products and services</li>\n</ul>\n<ul>\n<li>Risk policy support to build trust in our platform</li>\n</ul>\n<ul>\n<li>User verification and onboarding experiences that balance security with seamless user journeys</li>\n</ul>\n<ul>\n<li>Fraud detection systems that evolve as quickly as the threats they combat</li>\n</ul>\n<ul>\n<li>Operational excellence that keeps our community safe while maintaining the hospitality that defines Airbnb</li>\n</ul>\n<p>A Typical Day:</p>\n<ul>\n<li>Protect our community by reviewing and making exceptional decisions for platform exemptions to maintain trust and safety across Airbnb</li>\n</ul>\n<ul>\n<li>Own incident resolution of risky pay-in procedures from escalation to closure, ensuring swift and thorough case management that protects our hosts and guests</li>\n</ul>\n<ul>\n<li>Navigate complex operational issues by partnering with Legal, Public Affairs, and other teams to respond to regulatory inquiries related to fraud and criminal activities</li>\n</ul>\n<ul>\n<li>Tell the story through data by drafting business requirements and concept briefs that highlight key operational needs for platform development</li>\n</ul>\n<p>Your Expertise:</p>\n<ul>\n<li>Minimum of 3+ years professional experience in fraud, abuse, or cybercrime investigations</li>\n</ul>\n<ul>\n<li>Minimum of 1+ years professional experience related to stored value risk (e.g. 
gift cards, coupons, credits, incentives)</li>\n</ul>\n<ul>\n<li>Detail-oriented, highly analytical, and strong project management skills</li>\n</ul>\n<ul>\n<li>Ability to understand opposing points of view on highly complex issues</li>\n</ul>\n<ul>\n<li>Strong ability to gather information from various internal sources</li>\n</ul>\n<ul>\n<li>Capacity to draw actionable insights from dashboards and reports</li>\n</ul>\n<ul>\n<li>Risk policy creation and/or administration experience</li>\n</ul>\n<ul>\n<li>Basic SQL</li>\n</ul>\n<ul>\n<li>Experience with data visualization and business intelligence tools (e.g. Tableau, Superset)</li>\n</ul>\n<ul>\n<li>CFE or equivalent certifications</li>\n</ul>\n<p>Your Location:</p>\n<p>This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager.</p>\n<p>While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity.</p>\n<p>Click here for the up-to-date list of excluded states.</p>\n<p>This list is continuously evolving, so please check back with us if the state you live in is on the exclusion list</p>\n<p>If your position is employed by another Airbnb entity, your recruiter will inform you what states you are eligible to work from.</p>\n<p>Our Commitment To Inclusion &amp; Belonging:</p>\n<p>Airbnb is committed to working with the broadest talent pool possible.</p>\n<p>We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions.</p>\n<p>All qualified individuals are encouraged to apply.</p>\n<p>We strive to also provide a disability inclusive application and interview process.</p>\n<p>If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com.</p>\n<p>Please include your full name, 
the role you’re applying for and the accommodation necessary to assist you with the recruiting process.</p>\n<p>We ask that you only reach out to us if you are a candidate whose disability prevents you from being able to complete our online application.</p>\n<p>How We&#39;ll Take Care of You:</p>\n<p>Our job titles may span more than one career level.</p>\n<p>The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands.</p>\n<p>The base pay range is subject to change and may be modified in the future.</p>\n<p>This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Pay Range $82,000-$96,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_24909215-1c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7767312","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$82,000-$96,000 USD","x-skills-required":["fraud","abuse","cybercrime investigations","stored value risk","gift cards","coupons","credits","incentives","detail-oriented","analytical","project management","SQL","data visualization","business intelligence"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:14.438Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"fraud, abuse, cybercrime investigations, stored value risk, gift cards, coupons, credits, incentives, detail-oriented, analytical, project management, SQL, data visualization, business 
intelligence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":82000,"maxValue":96000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f578503a-af9"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>\n<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need to have a proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>\n<p>You will also have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, and have strong skills in ethical reasoning and risk assessment. 
You will also have a strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>\n<p>In addition, you will have strong communication, interpersonal, analytical, and ethical decision-making skills. You will be committed to continuous improvement of processes to prioritize safety and risk mitigation. You will also have expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>\n<p>Preferred qualifications include experience working in Trust and Safety for a social media company, leveraging AI or other automation tools. You will also have experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms. Additionally, you will have expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>","url":"https://yubhub.co/jobs/job_f578503a-af9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097904007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$43.75 - $62.50 USD hourly","x-skills-required":["Improving Large Language 
Models (LLMs)","Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE)","Online safety and reducing harm","Ethical reasoning and risk assessment","Data analysis"],"x-skills-preferred":["Experience working in a Trust and Safety for a social media company","Collaborating with child safety organizations","Red-teaming and adversarial testing of Large Language Models"],"datePosted":"2026-04-18T15:25:26.718Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Improving Large Language Models (LLMs), Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), Online safety and reducing harm, Ethical reasoning and risk assessment, Data analysis, Experience working in a Trust and Safety for a social media company, Collaborating with child safety organizations, Red-teaming and adversarial testing of Large Language Models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f818897-404"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. 
As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Process appeals, audit automations, and properly label use cases in the system.</li>\n<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support and ability to propose solutions to increase security and safety of our platform.</li>\n<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n<li>Ability to interpret and apply xAI safety policies effectively.</li>\n<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>\n<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>\n<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>\n<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>\n<li>Expertise in data analysis to identify 
emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience working in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>\n<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>\n<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>\n</ul>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>","url":"https://yubhub.co/jobs/job_2f818897-404","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097907007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Large Language Models (LLMs)","Child Sexual Abuse Material (CSAM)","Child Sexual Exploitation (CSE)","Online safety","Risk assessment","Ethical reasoning","Data analysis","Automation tools","Social media","Generative AI"],"x-skills-preferred":["Red-teaming","Adversarial testing","Trust and Safety","Child safety organizations","Specialized detection tools","Classifier 
development"],"datePosted":"2026-04-18T15:25:17.446Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ed877cf7-715"},"title":"Member of Technical Staff - X Money, Fraud and Payments","description":"<p>We&#39;re looking for an exceptional Software Engineer to focus on Fraud Engineering for a new payments platform serving 600 million+ monthly users. This high-priority role is responsible for protecting users and the platform from fraud, abuse, and risk. 
You&#39;ll play a key role in designing and implementing systems to detect, prevent, and mitigate fraud in real time, at scale.</p>\n<p>Your work will be at the intersection of security, distributed systems, and product engineering, helping build trusted payments infrastructure from the ground up.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement fraud detection and prevention systems that operate at global scale and low latency</li>\n<li>Develop risk scoring engines, anomaly detection pipelines, and real-time enforcement mechanisms</li>\n<li>Collaborate with product, compliance, X Money, Fraud Prevention, and infrastructure teams to ensure a secure and seamless user experience</li>\n<li>Monitor and analyze fraud trends, and proactively respond to new attack vectors</li>\n<li>Define engineering standards around observability, reliability, and rapid response in fraud-related systems</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>7+ years of backend or systems engineering, with exposure to fraud, risk, or abuse prevention systems preferred</li>\n<li>Skilled in distributed systems: You&#39;ve built resilient, high-throughput systems that operate under real-time constraints</li>\n<li>Security-conscious: You understand threat models, data sensitivity, and defense-in-depth principles</li>\n<li>Analytical and pragmatic: You value simple, high-leverage solutions and adapt quickly to evolving challenges</li>\n<li>Builder mentality: You&#39;re excited by zero-to-one problems and have a proven ability to thrive in fast-paced environments. 
You are willing to work hard.</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Experience with real-time anomaly detection, machine learning for fraud, or rule-based risk systems</li>\n<li>Familiarity with AML/KYC regulations, chargeback flows, or identity verification systems</li>\n<li>Experience in fintech, trust &amp; safety, or adversarial system design</li>\n<li>Comfortable working in a zero-to-one environment with rapid iteration</li>\n<li>Experience with: Golang, Postgres, Kafka, Memcached</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_ed877cf7-715","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://xai.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4758524007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["backend or systems engineering","fraud, risk, or abuse prevention systems","distributed systems","security-conscious","analytical and pragmatic","builder mentality","Golang","Postgres","Kafka","Memcached"],"x-skills-preferred":["real-time anomaly detection","machine learning for fraud","rule-based risk systems","AML/KYC regulations","chargeback flows","identity verification systems","fintech","trust & safety","adversarial system 
design"],"datePosted":"2026-04-18T15:24:12.933Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend or systems engineering, fraud, risk, or abuse prevention systems, distributed systems, security-conscious, analytical and pragmatic, builder mentality, Golang, Postgres, Kafka, Memcached, real-time anomaly detection, machine learning for fraud, rule-based risk systems, AML/KYC regulations, chargeback flows, identity verification systems, fintech, trust & safety, adversarial system design","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1811a69-c2f"},"title":"Manager, Safety Operations","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Manager, Safety Operations to oversee the processing of appeals and ensure proper labeling of use cases in the system.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Guide the team&#39;s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n<li>Ensure the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n<li>Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.</li>\n<li>Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven leadership 
and people management experience in AI-driven operations, with a track record of developing high-performing teams.</li>\n<li>Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support and ability to propose and implement solutions to increase security and safety of our platform.</li>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n<li>Ability to interpret, apply, and train teams on xAI safety policies effectively.</li>\n<li>Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.</li>\n<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.</li>\n<li>Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.</li>\n<li>Quality assurance: Ability to hold the team to our high standard for quality work; managing performance as needed.</li>\n<li>Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.</li>\n<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>\n<li>Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1811a69-c2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090695007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Leadership and people management experience in AI-driven operations","Expertise in improving Large Language Models (LLMs)","Proven experience in online safety and reducing harm","Ability to interpret, apply, and train teams on xAI safety policies","Proficiency in analyzing complex scenarios and operational metrics","Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills","Quality assurance: Ability to hold the team to our high standard for quality work","Commitment to continuous improvement of processes, people, and operations","Expertise in data analysis to identify emerging abuse vectors"],"x-skills-preferred":["Experience managing teams in Trust and Safety for a social media company","Expertise in leading red-teaming and adversarial testing of Large Language Models"],"datePosted":"2026-04-18T15:23:50.832Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Leadership and people management experience in AI-driven operations, Expertise in improving Large Language Models (LLMs), Proven experience in online safety and reducing harm, Ability to interpret, apply, and train teams on xAI safety policies, Proficiency in analyzing complex scenarios and operational metrics, Strong leadership, communication, interpersonal, analytical, and ethical decision-making 
skills, Quality assurance: Ability to hold the team to our high standard for quality work, Commitment to continuous improvement of processes, people, and operations, Expertise in data analysis to identify emerging abuse vectors, Experience managing teams in Trust and Safety for a social media company, Expertise in leading red-teaming and adversarial testing of Large Language Models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_183800c4-b3d"},"title":"Researcher, Frontier Cybersecurity Risks","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness 
support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the team</strong></p>\n<p>The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.</p>\n<p>The Preparedness team is an important part of the [Safety Systems](https://openai.com/safety/safety-systems) org at OpenAI, and is guided by OpenAI’s [Preparedness Framework](https://openai.com/index/updating-our-preparedness-framework/).</p>\n<p>Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.</p>\n<p>The mission of the Preparedness team is to:</p>\n<ol>\n<li>Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic</li>\n</ol>\n<ol>\n<li>Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems</li>\n</ol>\n<p><strong>About the role</strong></p>\n<p>Models are becoming increasingly capable—moving from tools that assist humans to agents that can plan, execute, and adapt in the real world. 
As we push toward AGI, cybersecurity becomes one of the most important and urgent frontiers: the same systems that can accelerate productivity can also accelerate exploitation.</p>\n<p>As a Researcher for cybersecurity risks, you will help design and implement an end-to-end mitigation stack to reduce severe cyber misuse across OpenAI’s products. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as products, model capabilities, and attacker behaviors evolve.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and implement mitigation components for model-enabled cybersecurity misuse—spanning prevention, monitoring, detection, and enforcement—under the guidance of senior technical and risk leadership.</li>\n</ul>\n<ul>\n<li>Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and scale with usage and new model capabilities.</li>\n</ul>\n<ul>\n<li>Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.</li>\n</ul>\n<ul>\n<li>Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.</li>\n</ul>\n<ul>\n<li>Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across different product surfaces—then iterate based on findings.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.</li>\n</ul>\n<ul>\n<li>Bring demonstrated 
experience in deep learning and transformer models.</li>\n</ul>\n<ul>\n<li>Are proficient with frameworks such as PyTorch or TensorFlow.</li>\n</ul>\n<ul>\n<li>Possess a strong foundation in data structures, algorithms, and software engineering principles.</li>\n</ul>\n<ul>\n<li>Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.</li>\n</ul>\n<ul>\n<li>Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.</li>\n</ul>\n<ul>\n<li>Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.</li>\n</ul>\n<ul>\n<li>(Nice to have) Bring background knowledge in cybersecurity or adjacent fields.</li>\n</ul>","url":"https://yubhub.co/jobs/job_183800c4-b3d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/97a7eeae-9625-4d00-874f-e50131f98369","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"Estimated Base Salary $295K – $445K","x-skills-required":["Deep learning","Transformer models","PyTorch","TensorFlow","Data structures","Algorithms","Software engineering principles","Large language models","Abuse prevention","Detection","Enforcement"],"x-skills-preferred":["Cybersecurity","Adjacent fields"],"datePosted":"2026-03-08T22:17:00.637Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep learning, Transformer models, PyTorch, TensorFlow, Data 
structures, Algorithms, Software engineering principles, Large language models, Abuse prevention, Detection, Enforcement, Cybersecurity, Adjacent fields","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ef383eb-d73"},"title":"Abuse Investigator (CBRN)","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and 
development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real world deployment and iteratively updating based on what we learn.</p>\n<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting misuse of our platform or services. Specifically, you will focus on cases where users attempt to use our platform in connection with prohibited activities such as developing or delivering biological and/or chemical threats to harm people, critical resources/infrastructure, or the environment. OpenAI has strict prohibitions and policies in this area, and you will detect, disrupt, and enforce on actors who violate our policies.</p>\n<p>This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment.</p>\n<p>You will respond to time-sensitive escalations and will be expected to present your investigative work, both in writing and verbally, to key stakeholders across government, industry, and civil society, when required. 
You will also help inform the company’s evolving threat response and integrity monitoring and mitigation stack, while working closely on individual cases and enforcement assessments.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Detect, investigate, and disrupt the attempted misuse of OpenAI products for the development or dissemination of biological threats, including dual-use misuse and emerging biothreat vectors. You will also be expected to work across related domains (e.g., chemical threats).</li>\n</ul>\n<ul>\n<li>Partner closely with teams across Policy, Legal, Integrity, Global Affairs, and Security to conduct robust investigations, including cross-internet and open-source research to trace and understand abuse and ensure OpenAI’s mitigations address evolving needs in the space.</li>\n</ul>\n<ul>\n<li>Develop abuse signals and tracking strategies to proactively detect users attempting dual-use or biohazard-related misuse of our platform and review content for enforcement decisions.</li>\n</ul>\n<ul>\n<li>Communicate findings from your investigations with internal stakeholders and leadership and, at times, external partners including regulatory or scientific organizations.</li>\n</ul>\n<ul>\n<li>Develop a categorical understanding of our product surfaces in the biosecurity space, and work with teams to improve data visibility and internal tooling.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have industry-leading experience in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), or related biodefense fields,</li>\n</ul>\n<ul>\n<li>Have strong familiarity with technical investigations, especially using SQL and Python, in a government/military and/or tech company</li>\n</ul>\n<ul>\n<li>Have demonstrated experience in risk-mitigation (e.g., adversarial thinking and record of success in threat mitigation)</li>\n</ul>\n<ul>\n<li>Have worked on investigations related to 
biological threat actors, malicious dual-use exploitation, or responsible innovation in synthetic biology or bioengineering</li>\n</ul>\n<ul>\n<li>Have 5+ years of experience tracking misuse and/or abuse in biosecurity or life sciences domains, or equivalent education in these domains</li>\n</ul>\n<ul>\n<li>Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve real-world problems</li>\n</ul>\n<ul>\n<li>Have experience presenting analytical work in public or policy settings</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see <a href=\"https://cdn.openai.com/policies/eeo-policy-statement.pdf\">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for 
Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protec</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ef383eb-d73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/5d618f84-fcce-496c-bfe9-995bd9ff9065","x-work-arrangement":"Remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$230.4K – $425K","x-skills-required":["biosecurity","biological weapons non-proliferation","dual-use research of concern (DURC)","biodefense","SQL","Python","risk-mitigation","adversarial thinking","threat mitigation","biological threat actors","malicious dual-use exploitation","responsible innovation in synthetic biology or bioengineering","misuse and/or abuse in biosecurity or life sciences domains","innovative detection solutions","open-ended research","analytical work in public or policy settings","scaling and automating processes","language models"],"x-skills-preferred":[],"datePosted":"2026-03-08T22:14:53.880Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; San Francisco; Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, SQL, Python, risk-mitigation, adversarial thinking, threat mitigation, biological threat actors, malicious dual-use exploitation, 
responsible innovation in synthetic biology or bioengineering, misuse and/or abuse in biosecurity or life sciences domains, innovative detection solutions, open-ended research, analytical work in public or policy settings, scaling and automating processes, language models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230400,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18528dac-ae1"},"title":"Threat Collections Engineer","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a Threat Collections Engineer to join our Threat Intelligence team. In this role, you will build the infrastructure that powers our threat discovery capabilities—integrating external data sources, developing detection systems for automated lead generation, and creating internal tooling that scales our investigators&#39; impact.</p>\n<p>This is a foundational engineering role on a small, high-impact team. 
You will take projects from proof-of-concept to production, work closely with investigators to understand their needs, and help scale what may become a multi-person collections function.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build automated detection systems that use disparate signals to identify abusive behaviour.</li>\n<li>Take systems from idea to proof-of-concept to production-grade with appropriate monitoring, documentation, and maintenance processes</li>\n<li>Develop and maintain YARA rule infrastructure, including tools for writing, validating, and testing rules against real data</li>\n<li>Create integrations with external threat intelligence platforms (e.g. VirusTotal, Censys, Urlscan) via MCP servers to enable multi-source correlation during investigations</li>\n<li>Build data pipelines that ingest intelligence from RSS feeds, CTI news sources, and partner sharing, using Claude to extract TTPs and generate targeted hunting queries</li>\n<li>Develop behavioural analytics capabilities using DBT-based frameworks and create searchable audit logging infrastructure</li>\n<li>Establish feedback loops with investigators to tune detection systems and reduce false positives</li>\n<li>Scrape and normalise data from external sources to feed threat detection and enrichment workflows</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have strong coding proficiency in Python and SQL for building detection logic, data pipelines, and automation</li>\n<li>Have experience with data pipeline orchestration tools (Airflow, DBT, or similar)</li>\n<li>Have familiarity with threat intelligence concepts including IOCs, YARA rules, and threat correlation techniques</li>\n<li>Have experience integrating external APIs and building data ingestion systems</li>\n<li>Can translate investigator needs and workflows into technical requirements</li>\n<li>Are comfortable building v0 systems and iterating based on user feedback</li>\n<li>Have 
strong communication skills for working closely with non-engineering stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience with threat intelligence sharing frameworks (e.g. MISP, STIX/TAXII)</li>\n<li>Background in cyber threat intelligence, security operations, or abuse detection</li>\n<li>Experience building MCP servers or similar tool integrations for AI systems</li>\n<li>Familiarity with web scraping and data extraction at scale</li>\n<li>Experience with behavioural analytics or anomaly detection systems</li>\n<li>Understanding of LLM capabilities and how to leverage them for automation</li>\n<li>A Top Secret Clearance</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. 
We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. 
We view AI research as an empirical science, which has as much in common with physics as it does with computer science.</p>","url":"https://yubhub.co/jobs/job_18528dac-ae1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5074937008","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$300,000 - $320,000 USD","x-skills-required":["Python","SQL","Airflow","DBT","YARA rules","Threat intelligence","API integration","Data ingestion","Web scraping","Data extraction"],"x-skills-preferred":["MISP","STIX/TAXII","Cyber threat intelligence","Security operations","Abuse detection","LLM capabilities","Automation"],"datePosted":"2026-03-08T13:53:41.541Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Airflow, DBT, YARA rules, Threat intelligence, API integration, Data ingestion, Web scraping, Data extraction, MISP, STIX/TAXII, Cyber threat intelligence, Security operations, Abuse detection, LLM capabilities, Automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68c29e94-faa"},"title":"Technical Cyber Threat Investigator","description":"<p><strong>About the Role</strong></p>\n<p>We are looking for a Technical Cyber Threat Investigator to join our Threat Intelligence team. 
In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for malicious cyber operations.</p>\n<p>You will work at the intersection of AI safety and cybersecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging cyber threats in the rapidly evolving landscape of AI-enabled risks. Your work will directly protect the broader ecosystem from sophisticated threat actors who seek to leverage AI technology for harm.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for cyber operations, including influence operations, malware development, social engineering, and other adversarial activities</li>\n</ul>\n<ul>\n<li>Develop abuse signals and tracking strategies to proactively detect sophisticated threat actors across our platform</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting LLM systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, using open-source research, dark web monitoring, and internal data</li>\n</ul>\n<ul>\n<li>Utilize investigation findings to implement systematic improvements to our safety approach and mitigate harm at scale</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem to anticipate how AI systems could be misused, generating and publishing reports</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external threat intelligence partners, information sharing communities, and government stakeholders</li>\n</ul>\n<ul>\n<li>Work cross-functionally to build out our threat intelligence program, establishing processes, tools, and best practices</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have demonstrated proficiency in 
SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience with large language models and understanding of how AI technology could be misused for cyber threats</li>\n</ul>\n<ul>\n<li>Have subject matter expertise in abusive user behaviour detection, such as influence operations, coordinated inauthentic behaviour, or cyber threat intelligence</li>\n</ul>\n<ul>\n<li>Have experience tracking threat actors across surface, deep, and dark web environments</li>\n</ul>\n<ul>\n<li>Can derive insights from large datasets to make key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Have experience with threat actor profiling and utilising threat intelligence frameworks (MITRE ATT&amp;CK, etc.)</li>\n</ul>\n<ul>\n<li>Have strong project management skills and ability to build processes from the ground up</li>\n</ul>\n<ul>\n<li>Possess excellent communication skills to collaborate with cross-functional teams and present to leadership</li>\n</ul>\n<p><strong>Strong candidates may also have</strong></p>\n<ul>\n<li>Experience working with government agencies or in regulated environments</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p><strong>Deadline to apply</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! 
However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/career</p>","url":"https://yubhub.co/jobs/job_68c29e94-faa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066995008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000 - $290,000 USD","x-skills-required":["SQL","Python","large language models","AI technology","cyber threats","abusive user behaviour detection","threat actor profiling","threat intelligence frameworks","project management","communication skills"],"x-skills-preferred":["experience working with government agencies","background in AI safety","machine learning security","technology abuse investigation","experience building and scaling threat detection systems"],"datePosted":"2026-03-08T13:53:20.742Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, large language models, AI technology, cyber threats, abusive user behaviour detection, threat actor profiling, threat intelligence frameworks, project management, communication skills, experience working with government agencies, background in AI safety, machine learning security, technology abuse investigation, experience building and scaling threat detection 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7df914df-096"},"title":"Software Engineer, Safeguards","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are seeking a Software Engineer, Safeguards to join our team. As a Software Engineer, Safeguards, you will be responsible for developing monitoring systems to detect unwanted behaviours from our API partners and potentially taking automated enforcement actions. 
You will also be responsible for building abuse detection mechanisms and infrastructure, as well as surfacing abuse patterns to our research teams to harden models at the training stage.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Develop monitoring systems to detect unwanted behaviours from our API partners and potentially take automated enforcement actions; surface these in internal dashboards to analysts for manual review</li>\n<li>Build abuse detection mechanisms and infrastructure</li>\n<li>Surface abuse patterns to our research teams to harden models at the training stage</li>\n<li>Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>5-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection and mitigation</li>\n<li>Proficiency in Python and Typescript</li>\n<li>Ability to work across the stack</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience building trust and safety detection mechanisms and intervention for AI/ML systems</li>\n<li>Have experience with prompt engineering, jailbreak attacks, and other adversarial inputs</li>\n<li>Have worked closely with operational teams to build custom internal tooling</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time</li>\n<li>Visa sponsorship: We do sponsor visas! 
However, we aren&#39;t able to successfully sponsor visas for every role and every candidate</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Interested in building your career at Anthropic? 
Get future opportunities sent straight to your email.</p>","url":"https://yubhub.co/jobs/job_7df914df-096","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4951844008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $425,000 USD","x-skills-required":["Python","Typescript","Software Engineering","Abuse Detection","Machine Learning"],"x-skills-preferred":["Prompt Engineering","Jailbreak Attacks","Adversarial Inputs","Trust and Safety Detection"],"datePosted":"2026-03-08T13:51:12.817Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Typescript, Software Engineering, Abuse Detection, Machine Learning, Prompt Engineering, Jailbreak Attacks, Adversarial Inputs, Trust and Safety Detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c8d7ea06-b25"},"title":"Technical CBRN-E Threat Investigator","description":"<p><strong>About the Role</strong></p>\n<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. 
In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>\n</ul>\n<ul>\n<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>\n</ul>\n<ul>\n<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and 
Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<p><strong>Strong candidates may also have</strong></p>\n<ul>\n<li>Advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or related field</li>\n</ul>\n<ul>\n<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>\n</ul>\n<ul>\n<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Familiarity with synthetic biology, biotechnology, or dual-use research</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>","url":"https://yubhub.co/jobs/job_c8d7ea06-b25","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066997008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000 - $290,000 USD","x-skills-required":["SQL","Python","CBRN-E threat domains","biosecurity","chemical defense","biological weapons non-proliferation","dual-use research of concern (DURC)","synthetic biology","threat actor profiling","threat intelligence frameworks","large language models","AI technology","stakeholder management"],"x-skills-preferred":["advanced degree in biological sciences, chemistry, biodefense, biosecurity, or related field","real-world experience countering weapons of mass destruction or other high-risk asymmetric threats","experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information","background in AI safety, machine learning security, or technology abuse investigation","familiarity with synthetic biology, biotechnology, or dual-use 
research","experience building and scaling threat detection systems or abuse monitoring programs","active Top Secret security clearance"],"datePosted":"2026-03-08T13:49:06.543Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, CBRN-E threat domains, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, threat actor profiling, threat intelligence frameworks, large language models, AI technology, stakeholder management, advanced degree in biological sciences, chemistry, biodefense, biosecurity, or related field, real-world experience countering weapons of mass destruction or other high-risk asymmetric threats, experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information, background in AI safety, machine learning security, or technology abuse investigation, familiarity with synthetic biology, biotechnology, or dual-use research, experience building and scaling threat detection systems or abuse monitoring programs, active Top Secret security clearance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ee98770-f81"},"title":"Technical Influence Operations Threat Investigator","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. 
Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behaviour, and other forms of information manipulation.</p>\n<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>\n<p>_Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
This role may require responding to escalations during weekends and holidays._</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behaviour, astroturfing, and narrative manipulation campaigns</li>\n</ul>\n<ul>\n<li>Conduct technical investigations using SQL, Python, and other tools to analyse large datasets, trace user behaviour patterns, and uncover coordinated networks of threat actors conducting influence operations</li>\n</ul>\n<ul>\n<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioural clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>\n</ul>\n<ul>\n<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>\n</ul>\n<ul>\n<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>\n</ul>\n<ul>\n<li>Monitor and analyse state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>\n</ul>\n<ul>\n<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>\n</ul>\n<ul>\n<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>\n</ul>\n<ul>\n<li>Forecast how advances in AI technology—including improved content generation, voice synthesis, and multimodal capabilities—will reshape the influence operations landscape and inform safety-by-design strategies</li>\n</ul>\n<p><strong>You 
may be a good fit if you:</strong></p>\n<ul>\n<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behaviour, disinformation, or information warfare</li>\n</ul>\n<ul>\n<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>\n</ul>\n<ul>\n<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>\n</ul>\n<ul>\n<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>\n</ul>\n<ul>\n<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>\n</ul>\n<ul>\n<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>\n</ul>\n<ul>\n<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>\n</ul>\n<ul>\n<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>\n</ul>\n<ul>\n<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>\n</ul>\n<ul>\n<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>\n</ul>\n<ul>\n<li>Familiarity with social network analysis techniques and tools for mapping coordinated behaviour</li>\n</ul>\n<ul>\n<li>Background in AI safety, machine learning 
security, or technology abuse investigation</li>\n</ul>\n<ul>\n<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>\n</ul>\n<ul>\n<li>Active Top Secret security clearance</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration attorney to assist with the process.</p>","url":"https://yubhub.co/jobs/job_1ee98770-f81","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5140239008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230,000 - $290,000 USD","x-skills-required":["SQL","Python","influence operations","disinformation","coordinated inauthentic behaviour","astroturfing","narrative manipulation campaigns","large language models","open-source intelligence (OSINT) methodologies","social network analysis techniques"],"x-skills-preferred":["fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic","background in intelligence analysis, information operations, or counter-disinformation","experience building and scaling threat detection systems or abuse monitoring 
programs"],"datePosted":"2026-03-08T13:47:58.152Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, influence operations, disinformation, coordinated inauthentic behaviour, astroturfing, narrative manipulation campaigns, large language models, open-source intelligence (OSINT) methodologies, social network analysis techniques, fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic, background in intelligence analysis, information operations, or counter-disinformation, experience building and scaling threat detection systems or abuse monitoring programs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_813dd0ec-e42"},"title":"Software Engineer, Safeguards Infrastructure","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for software engineers to help build the foundational pieces for safety, oversight and intervention mechanisms of our AI systems. As a software engineer on the Safeguards team, you will work to monitor models, prevent misuse, and ensure user well-being. This role will focus on building systems to detect unwanted model behaviors and prevent disallowed use of models. 
You will apply your technical skills to uphold our principles of safety, transparency, and oversight.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Develop the foundational systems which power Safeguards, including infrastructure for data storage and management, metric and evaluation systems, and tooling for human and agentic review.</li>\n<li>Ensure the day-to-day running of Safeguards systems and hold a high operational bar which serves both safety and customers while reducing the amount of human intervention and oversight required.</li>\n<li>Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>4-10+ years of experience in a software engineering position</li>\n<li>Proficiency in Python</li>\n<li>Ability to work across the stack</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience building trust and safety, anti-spam, fraud or abuse detection and mitigation mechanisms and interventions for AI/ML systems</li>\n<li>Have experience building metrics and measurement systems or data and privacy management systems</li>\n<li>Have worked closely with operational teams to build custom internal tooling</li>\n<li>Be proficient in TypeScript or Rust</li>\n<li>Have experience with Claude Code or similar agentic coding tools</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. 
<strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. 
We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. 
<strong>Guidance on Candidates&#39; AI Usage:</strong> Learn about our policy for using AI in our application process</p>","url":"https://yubhub.co/jobs/job_813dd0ec-e42","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5074908008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£255,000 - £325,000GBP","x-skills-required":["Python","Software Engineering","Computer Science","Data Storage and Management","Metric and Evaluation Systems","Tooling for Human and Agentic Review"],"x-skills-preferred":["TypeScript","Rust","Claude Code","Agentic Coding Tools","Trust and Safety","Anti-Spam","Fraud or Abuse Detection and Mitigation Mechanisms"],"datePosted":"2026-03-08T13:47:56.482Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Software Engineering, Computer Science, Data Storage and Management, Metric and Evaluation Systems, Tooling for Human and Agentic Review, TypeScript, Rust, Claude Code, Agentic Coding Tools, Trust and Safety, Anti-Spam, Fraud or Abuse Detection and Mitigation Mechanisms","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":255000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ac007c05-251"},"title":"Staff Product Engineer, Product Platform","description":"<p><strong>About The Role</strong></p>\n<p>As a Staff Product Engineer on Replit’s Product Platform team, 
you’ll build the shared product systems and primitives that power Replit’s core experiences — enabling product teams to ship faster and helping users (and agents) build better software.</p>\n<p><strong>What you’ll do</strong></p>\n<ul>\n<li>Lead major cross-team platform initiatives, taking foundational systems from 0 → 1 and scaling them to support millions of users</li>\n</ul>\n<ul>\n<li>Build shared, extensible Agent primitives that Replit Agent can reuse safely and consistently (Meta Programming)</li>\n</ul>\n<ul>\n<li>Identify the highest-leverage technical bottlenecks (performance, reliability, correctness, abuse, observability), then design and ship solutions for our scale</li>\n</ul>\n<ul>\n<li>Raise the bar for engineering excellence through architecture reviews, code quality, reliability standards, and mentorship</li>\n</ul>\n<ul>\n<li>Partner across teams to improve platform adoption, ergonomics, and velocity — turning platform work into measurable outcomes</li>\n</ul>\n<p><strong>Core areas you’ll work on</strong></p>\n<ul>\n<li>Agents and Replit users depend on us to build applications (e.g. Connectors framework, Content/configuration primitives (CMS + product surfaces), Data/analytics/events + experimentation primitives)</li>\n</ul>\n<ul>\n<li>Replit Agent as a principal in third party systems. Agent can be fully used within ChatGPT and publishes straight to the iOS app store. We’ll be doing loads of that.</li>\n</ul>\n<ul>\n<li>Platform product teams rely on us to ship consistently (e.g. Identity &amp; Access platform (SSO/SCIM), Localization/i18n platform, Notifications &amp; communications platform)</li>\n</ul>\n<ul>\n<li>Core web platform infrastructure (e.g. 
performance &amp; page load optimization, observability and debugging workflows, caching strategy and reliability)</li>\n</ul>\n<p><strong>Required skills and experience</strong></p>\n<ul>\n<li>7+ years of professional software engineering experience</li>\n</ul>\n<ul>\n<li>Understanding of the full agentic software development stack, helping coding agents build, test and review correct code.</li>\n</ul>\n<ul>\n<li>Strong track record leading complex projects with cross-functional stakeholders</li>\n</ul>\n<ul>\n<li>Experience building and operating platform systems that other teams depend on</li>\n</ul>\n<ul>\n<li>Experience operating and scaling systems in production (reliability, performance, incidents, on-call readiness)</li>\n</ul>\n<ul>\n<li>Strong product judgment: you can balance UX, speed, correctness, and long-term maintainability</li>\n</ul>\n<ul>\n<li>Comfort working in modern web stacks such as TypeScript, React, Node.js, Postgres</li>\n</ul>\n<p><strong>Bonus points</strong></p>\n<ul>\n<li>Experience working in environments with a high engineering bar (or a fast-growing startup where you shipped fast _without_ burning out quality)</li>\n</ul>\n<ul>\n<li>Experience with platform and distributed systems patterns (queues, workflows, caching, rate limiting, async processing)</li>\n</ul>\n<ul>\n<li>Familiarity with systems like Redis, Postgres, Workflow engines (e.g. 
Temporal), Auth and enterprise identity (SSO, SCIM), Abuse protection and edge systems (Cloudflare), Cloud platforms (GCP), Observability (Datadog, Sentry), Localization, Experimentation and event pipelines (Statsig, Segment, analytics/event tracking)</li>\n</ul>\n<p><strong>Example Projects You’ll Work On</strong></p>\n<ul>\n<li>Connectors platform for agents — ship a secure connector framework (OAuth/permissions/data access) so agents can integrate with Slack/Notion/GitHub/etc.</li>\n</ul>\n<ul>\n<li>Agent-facing external surfaces — own high-quality embedded experiences (desktop/extension/embeds) that let agents act in-context across tools</li>\n</ul>\n<ul>\n<li>Safety + abuse controls for agent actions — design permissioning, rate limits, and policy enforcement so agents can operate safely at scale</li>\n</ul>\n<ul>\n<li>Real-time notifications platform — design in-app/email surfaces + build reliable delivery/fanout, preferences, and observability</li>\n</ul>\n<ul>\n<li>Core web platform performance + caching — improve latency and reliability via caching strategy (Redis), profiling, and safe fallbacks</li>\n</ul>\n<ul>\n<li>Events + experimentation primitives — standardize tracking/metrics + feature flags/rollouts so teams can ship safely and measure impact</li>\n</ul>","url":"https://yubhub.co/jobs/job_ac007c05-251","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/af1dd557-3ed6-4be6-9756-c465ead52329","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$225K – $320K","x-skills-required":["TypeScript","React","Node.js","Postgres","Redis","Postgres","Workflow engines","Auth and enterprise identity","Abuse protection and 
edge systems","Cloud platforms","Observability"],"x-skills-preferred":["Temporal","Cloudflare","GCP","Datadog","Sentry","Statsig","Segment","analytics/event tracking"],"datePosted":"2026-03-07T15:19:44.784Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, React, Node.js, Postgres, Redis, Postgres, Workflow engines, Auth and enterprise identity, Abuse protection and edge systems, Cloud platforms, Observability, Temporal, Cloudflare, GCP, Datadog, Sentry, Statsig, Segment, analytics/event tracking","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":225000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c7fd6c0d-10c"},"title":"Senior Product Engineer, Product Platform","description":"<p>As a Staff Product Engineer on Replit&#39;s Product Platform team, you&#39;ll build the shared product systems and primitives that power Replit&#39;s core experiences — enabling product teams to ship faster and helping users (and agents) build better software.</p>\n<p>The nature of software development has changed and Replit is at the forefront of that revolution. The product platform team builds and scales the primitives that Replit Agent uses to empower over 40 million users to build anything they want.</p>\n<p>This role is ideal for a senior platform-minded web engineer who&#39;s shipped at scale, thrives in high-ownership environments, and can define what “good” looks like across reliability, performance, and developer experience.</p>\n<p>We&#39;ve hit a significant scale and have escape velocity. 
A number of our systems need to be scaled and rebuilt, so you&#39;ll get to build these out for 0 → 1 → huge scale quickly.</p>\n<p>You&#39;ll work closely with other product engineers, platform engineers, designers, product managers, and go-to-market partners to deliver foundational capabilities that unlock entire categories of product development.</p>\n<p>This is why we&#39;re looking for product engineers with a strong product sense and prior distributed systems experience who are excited about building platform primitives at scale!</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead major cross-team platform initiatives, taking foundational systems from 0 → 1 and scaling them to support millions of users</li>\n<li>Build shared, extensible Agent primitives that Replit Agent can reuse safely and consistently (Meta Programming)</li>\n<li>Identify the highest-leverage technical bottlenecks (performance, reliability, correctness, abuse, observability), then design and ship solutions for our scale</li>\n<li>Raise the bar for engineering excellence through architecture reviews, code quality, reliability standards, and mentorship</li>\n<li>Partner across teams to improve platform adoption, ergonomics, and velocity — turning platform work into measurable outcomes</li>\n</ul>\n<p><strong>Core areas you&#39;ll work on</strong></p>\n<ul>\n<li>Agents and Replit users depend on us to build applications (e.g. Connectors framework, Content/configuration primitives (CMS + product surfaces), Data/analytics/events + experimentation primitives)</li>\n<li>Replit Agent as a principal in third party systems. Agent can be fully used within ChatGPT and publishes straight to the iOS app store. We’ll be doing loads of that.</li>\n<li>Platform product teams rely on us to ship consistently (e.g. Identity &amp; Access platform (SSO/SCIM), Localization/i18n platform, Notifications &amp; communications platform)</li>\n<li>Core web platform infrastructure (e.g. 
performance &amp; page load optimization, observability and debugging workflows, caching strategy and reliability)</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of professional software engineering experience</li>\n<li>Understanding of the full agentic software development stack, helping coding agents build, test and review correct code</li>\n<li>Strong track record leading complex projects with cross-functional stakeholders</li>\n<li>Experience building and operating platform systems that other teams depend on</li>\n<li>Experience operating and scaling systems in production (reliability, performance, incidents, on-call readiness)</li>\n<li>Strong product judgment: you can balance UX, speed, correctness, and long-term maintainability</li>\n<li>Comfort working in modern web stacks such as TypeScript, React, Node.js, Postgres</li>\n</ul>\n<p><strong>Bonus points</strong></p>\n<ul>\n<li>Experience working in environments with a high engineering bar (or a fast-growing startup where you shipped fast _without_ burning out quality)</li>\n<li>Experience with platform and distributed systems patterns (queues, workflows, caching, rate limiting, async processing)</li>\n<li>Familiarity with systems like Redis, Postgres, Workflow engines (e.g. Temporal), Auth and enterprise identity (SSO, SCIM), Abuse protection and edge systems (Cloudflare), Cloud platforms (GCP), Observability (Datadog, Sentry), Localization, Experimentation and event pipelines (Statsig, Segment, analytics/event tracking)</li>\n<li>Excited about the future of programming, including agent workflows and developer tools</li>\n<li>Exposure to agent ecosystems (e.g. 
MCP-style patterns, tool integrations, structured automation)</li>\n</ul>\n<p><strong>Example Projects You’ll Work On</strong></p>\n<ul>\n<li>Connectors platform for agents — ship a secure connector framework (OAuth/permissions/data access) so agents can integrate with Slack/Notion/GitHub/etc.</li>\n<li>Agent-facing external surfaces — own high-quality embedded experiences (desktop/extension/embeds) that let agents act in-context across tools</li>\n<li>Safety + abuse controls for agent actions — design permissioning, rate limits, and policy enforcement so agents can operate safely at scale</li>\n<li>Real-time notifications platform — design in-app/email surfaces + build reliable delivery/fanout, preferences, and observability</li>\n<li>Core web platform performance + caching — improve latency and reliability via caching strategy (Redis), profiling, and safe fallbacks</li>\n<li>Events + experimentation primitives — standardize tracking/metrics + feature flags/rollouts so teams can ship safely and measure impact</li>\n</ul>","url":"https://yubhub.co/jobs/job_c7fd6c0d-10c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/fc946efb-f0f1-4f83-9ae1-055a11e7146b","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$225K – $320K","x-skills-required":["TypeScript","React","Node.js","Postgres","Redis","Postgres","Workflow engines","Auth and enterprise identity","Abuse protection and edge systems","Cloud platforms","Observability"],"x-skills-preferred":["Temporal","Cloudflare","GCP","Datadog","Sentry","Localization","Experimentation and event 
pipelines"],"datePosted":"2026-03-07T15:19:24.153Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, React, Node.js, Postgres, Redis, Postgres, Workflow engines, Auth and enterprise identity, Abuse protection and edge systems, Cloud platforms, Observability, Temporal, Cloudflare, GCP, Datadog, Sentry, Localization, Experimentation and event pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":225000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_138b24e2-2bd"},"title":"Senior Software Engineer, Anti-Abuse & Security","description":"<p><strong>About the role</strong> The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. 
This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>\n<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>\n<p><strong>In this role you will…</strong></p>\n<ul>\n<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>\n<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>\n<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>\n<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>\n<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>\n<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>\n<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>\n<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>\n<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>\n</ul>\n<p><strong>Required skills and experience:</strong></p>\n<ul>\n<li>4+ years of experience in security engineering, 
anti-abuse, trust &amp; safety, or fraud detection</li>\n<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>\n<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>\n<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>\n<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>\n<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>\n<li>Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>\n<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>\n<li>Background in fraud detection, payment abuse, or financial crime</li>\n<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>\n<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>\n<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>\n<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>\n<li><strong>Data:</strong> BigQuery, Hex</li>\n<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>\n<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>\n<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>\n<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse 
reports)</li>\n</ul>\n<p><strong>This role may <em>not</em> be a fit if</strong></p>\n<ul>\n<li>You prefer deep security research over building operational detection systems</li>\n<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>\n<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>\n<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>\n<li>You&#39;re uncomfortable making enforcement decisions that affect real users</li>\n</ul>\n<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>\n<p><strong>Full-Time Employee Benefits Include:</strong> 💰 Competitive Salary &amp; Equity 💹 401(k) Program with a 4% match ⚕️ Health, Dental, Vision and Life Insurance 🩼 Short Term and Long Term Disability 🚼 Paid Parental, Medical, Caregiver Leave 🚗 Commuter Benefits 📱 Monthly Wellness Stipend 🧑‍💻 Autonomous Work Environment 🖥 In Office Set-Up Reimbursement 🏝 Flexible Time Off (FTO) + Holidays 🚀 Quarterly Team Gatherings ☕ In Office Amenities</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_138b24e2-2bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190K – $240K","x-skills-required":["security engineering","anti-abuse","trust & safety","fraud detection","Python","TypeScript","SQL","BigQuery","Hex","ML/LLM-based 
classifiers","prompt injection","jailbreaking","common attack patterns","phishing infrastructure","account takeover","credential stuffing","resource abuse"],"x-skills-preferred":["experience at a platform company","fraud detection","payment abuse","financial crime","device fingerprinting","IP reputation","email validation services","CI/CD security tooling","container security","Linux internals","cloud infrastructure"],"datePosted":"2026-03-07T15:19:04.069Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, anti-abuse, trust & safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, experience at a platform company, fraud detection, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93c50f21-80e"},"title":"Strategic Risk Analyst","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Strategic Risk Analyst</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$198K – $320K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized 
factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual 
performance, team or company results, or market conditions.</p>\n<p><strong>About the team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analysing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p>We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.</p>\n<p><strong>About the role</strong></p>\n<p>As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI’s products and platforms. You will synthesise internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritisation inputs.</p>\n<p>You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. 
This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Monitor and analyse internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.</li>\n</ul>\n<ul>\n<li>Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distil implications for OpenAI’s products and threat landscape.</li>\n</ul>\n<ul>\n<li>Identify and deep dive into harms and misuse across products and channels, turning messy signals into clear analytic findings.</li>\n</ul>\n<ul>\n<li>Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover—pressure-testing hypotheses early.</li>\n</ul>\n<ul>\n<li>Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.</li>\n</ul>\n<ul>\n<li>Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.</li>\n</ul>\n<ul>\n<li>Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.</li>\n</ul>\n<ul>\n<li>Build early-warning and monitoring capabilities with data, engineering, and visualisation partners, including dashboards that highlight leading indicators and unusual changes.</li>\n</ul>\n<ul>\n<li>Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.</li>\n</ul>\n<ul>\n<li>Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders—and ensuring 
decisions and follow-ups are crisp.</li>\n</ul>\n<p><strong>You might thrive in this role if you have</strong></p>\n<ul>\n<li>Significant experience (typically <strong>5+ years</strong>) in trust and safety, integrity, security, policy analysis, or intelligence work.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyse complex online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritised recommendations.</li>\n</ul>\n<ul>\n<li>Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.</li>\n</ul>\n<ul>\n<li>Comfort working across qualitative and quantitative inputs, including (1) casework,</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93c50f21-80e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/d821a725-671f-4327-b918-9be90ef7be45","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$198K – $320K • Offers Equity","x-skills-required":["trust and safety","integrity","security","policy analysis","intelligence work","online harms","AI-enabled misuse","harassment","coordinated abuse","scams","synthetic media","influence operations","brand safety issues"],"x-skills-preferred":["data analysis","data visualisation","machine learning","natural language processing","software development"],"datePosted":"2026-03-06T18:42:41.351Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, integrity, security, policy analysis, intelligence work, online harms, AI-enabled misuse, harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues, data analysis, data visualisation, machine learning, natural language processing, software development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b220ac50-0a0"},"title":"Technical Abuse Investigator","description":"<p><strong>Location</strong></p>\n<p>San Francisco; New York City; Remote - US</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$198K – $220K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial 
intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.</p>\n<p>The Intelligence and Investigations team supports this mission by detecting, investigating, and disrupting the misuse of our products, particularly critical or novel harms. Our work enables partner teams to develop data-backed model policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, rewarding applications.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Technical Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting, investigating, and disrupting malicious use of OpenAI’s platform. You will further scale parts of the investigative process to help our team disrupt harm at scale. This role combines traditional investigative judgment with strong technical fluency: much of the work involves navigating complex datasets to surface actionable abuse signals, not just reviewing individual reports.</p>\n<p>In addition to conducting investigations directly, this role is explicitly designed to act as a force multiplier for the broader investigations team. You will be scaling or automating highly manual, important and nuanced processes. You will design and implement lightweight technical solutions—such as notebook templates, data pipelines or internal utilities—that enable specialized investigators to identify, track, and action abuse at a greater scale than a single investigator can currently achieve. 
Success in this role is measured not only by investigations completed, but by how effectively your work enables you and your team members to operate more efficiently and consistently.</p>\n<p>You will work closely with engineering, legal, investigations, security, and policy partners to respond to time-sensitive escalations, investigate activity that falls outside existing safeguards, and translate investigative insights into scalable detection and enforcement strategies.</p>\n<p>This role includes participation in an on-call rotation to handle urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material. This role will work <strong>PST</strong> and is open to remote work within the United States, though we heavily prefer candidates based in San Francisco or New York.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Detect, investigate and disrupt abuse and harm with policy, legal, global affairs, security, and engineering teams via complex datasets.</li>\n</ul>\n<ul>\n<li>Develop and iterate on abuse signals and investigative methods, scaling one-off insights to reduce manual effort and expand coverage.</li>\n</ul>\n<ul>\n<li>Build and maintain lightweight technical solutions (e.g., SQL/Python data pipelines, investigation templates, dashboards, or internal utilities) for investigators focused on specific harm domains.</li>\n</ul>\n<ul>\n<li>Develop a deep understanding of OpenAI’s products, data systems, and enforcement mechanisms, and collaborate with engineering and data teams to improve investigative tooling, data quality, and workflows.</li>\n</ul>\n<ul>\n<li>Communicate investigation findings effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries.</li>\n</ul>\n<ul>\n<li>Rotate (infrequently) into an incident response role that requires rapid threat triaging, investigation, mitigation, 
sound judgement and concise briefing to senior leadership.</li>\n</ul>\n<ul>\n<li>Be someone people enjoy working with.</li>\n</ul>\n<ul>\n<li>Demonstrate a proven ability to quickly learn new processes, systems and team dynamics while thriving in ambiguous, rapidly changing, and high-pressure environments.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have a strong background in computer science, software engineering, or a related field.</li>\n<li>Have experience with data analysis, machine learning, or other technical skills relevant to the role.</li>\n<li>Are able to work effectively in a fast-paced, dynamic environment.</li>\n<li>Are able to communicate complex technical information to non-technical stakeholders.</li>\n<li>Are able to work independently and as part of a team.</li>\n<li>Are able to adapt to changing priorities and deadlines.</li>\n<li>Are able to maintain confidentiality and handle sensitive information.</li>\n<li>Are able to work in a remote environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b220ac50-0a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/492ffc24-6b6e-4aa0-b31c-2a29a550b086","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$198K – $220K","x-skills-required":["computer science","software engineering","data analysis","machine learning","technical skills","investigative judgment","complex datasets","abuse signals","investigative methods","lightweight technical solutions","SQL","Python","data pipelines","investigation templates","dashboards","internal utilities","data systems","enforcement mechanisms","engineering","data teams","investigative tooling","data 
quality","workflows","written briefs","data-backed recommendations","escalation summaries","incident response","rapid threat triaging","investigation","mitigation","sound judgement","concise briefing","senior leadership","team dynamics","high-pressure environments"],"x-skills-preferred":["artificial intelligence","natural language processing","computer vision","deep learning","neural networks","data science","statistics","probability","mathematics","algorithm design","software development","testing","debugging","version control","agile development","scrum","kanban","project management","team leadership","communication","public speaking","writing","editing","proofreading"],"datePosted":"2026-03-06T18:40:49.522Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; New York City; Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"computer science, software engineering, data analysis, machine learning, technical skills, investigative judgment, complex datasets, abuse signals, investigative methods, lightweight technical solutions, SQL, Python, data pipelines, investigation templates, dashboards, internal utilities, data systems, enforcement mechanisms, engineering, data teams, investigative tooling, data quality, workflows, written briefs, data-backed recommendations, escalation summaries, incident response, rapid threat triaging, investigation, mitigation, sound judgement, concise briefing, senior leadership, team dynamics, high-pressure environments, artificial intelligence, natural language processing, computer vision, deep learning, neural networks, data science, statistics, probability, mathematics, algorithm design, software development, testing, debugging, version control, agile development, scrum, kanban, project management, team leadership, communication, public speaking, writing, editing, 
proofreading","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_036ee6e8-526"},"title":"Abuse Investigator (National Security)","description":"<p><strong>Location</strong></p>\n<p>Remote - US; London, UK; San Francisco; Seattle</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Remote</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230.4K – $425K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required 
by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real world deployment and iteratively updating based on what we learn.</p>\n<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting malicious uses and activities of our platform and disrupting actors that abuse our policies and other harmful behavior. This will require expert understanding of our products and data and experience investigating threat actors. 
You will also respond to time sensitive escalations, especially those that are not caught by our existing tools and safeguards.</p>\n<p>This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment. You’ll need a proven ability to quickly learn new processes, systems and team dynamics while thriving in ambiguous, rapidly changing, and high-pressure environments.</p>\n<p>This role is remote-friendly, though you’re welcome to work from our San Francisco office if desired. The role will include resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise-disturbing material.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Investigate activity and disrupt abusive operations in partnership with our policy, legal, integrity, global affairs and security teams, including by conducting cross-internet and open source research</li>\n</ul>\n<ul>\n<li>Develop abuse signals and tracking strategies to help proactively detect harmful activity on our platform</li>\n</ul>\n<ul>\n<li>Communicate investigation findings from your work with stakeholders internally and, at times, externally</li>\n</ul>\n<ul>\n<li>Develop a categorical understanding of our products and data, and work with technical teams to improve our data and tooling</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep expertise in open source intelligence and subject matter expertise in national security and/or influence operations, particularly where it intersects with emerging technical risks.</li>\n</ul>\n<ul>\n<li>Have strong familiarity with technical investigations, especially using SQL and Python, in a government/military, think tank setting, and/or tech company.</li>\n</ul>\n<ul>\n<li>Speak another language (ideally Chinese, Arabic, Farsi, 
Hindi), in addition to English.</li>\n</ul>\n<ul>\n<li>Have 10+ years of experience tracking threat actors in abuse domains.</li>\n</ul>\n<ul>\n<li>Have at least two years of experience helping to develop automated approaches to accomplishing your work.</li>\n</ul>\n<ul>\n<li>Have experience presenting analytic work in public or policy settings.</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models.</li>\n</ul>\n<ul>\n<li>Are able to hold a government security clearance.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_036ee6e8-526","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/aede8703-4ee6-437a-aa38-16787ed7f202","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230.4K – $425K","x-skills-required":["open source intelligence","SQL","Python","national security","influence operations","language models","government security clearance"],"x-skills-preferred":["technical investigations","cross-internet and open source research","abuse signals and tracking strategies","categorical understanding of products and 
data"],"datePosted":"2026-03-06T18:39:51.907Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; London, UK; San Francisco; Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"open source intelligence, SQL, Python, national security, influence operations, language models, government security clearance, technical investigations, cross-internet and open source research, abuse signals and tracking strategies, categorical understanding of products and data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230400,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_561e4146-713"},"title":"Senior Counsel, National Security and Platform Abuse","description":"<p><strong>Location</strong></p>\n<p>San Francisco; Washington, DC</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Legal</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$297K – $330K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong> OpenAI&#39;s Legal team plays a crucial role in furthering OpenAI&#39;s mission 
by tackling innovative, fundamental legal issues in AI. If you&#39;re passionate about doing significant and unique work as a technology lawyer, this team is for you. The team comprises professionals from diverse fields, including technology, AI, privacy, IP, corporate, employment, tax law, regulatory, and litigation.</p>\n<p><strong>About the Role</strong> OpenAI seeks a senior counsel to advise on national security and platform abuse legal matters. This is a unique opportunity to be at the forefront of the AI field and to contribute to combating threats to the safe development and deployment of AI systems. The ideal candidate will bring deep expertise in their core domain while demonstrating the flexibility and judgment to contribute across a wide range of compliance matters, investigations, and special projects. This role will work closely with our Legal, Security, Policy, Global Affairs, and Investigations teams and report to our Deputy General Counsel.</p>\n<p>This role will be based in our San Francisco, CA HQ or in Washington, DC. 
We use a hybrid HQ work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will be responsible for:</strong></p>\n<ul>\n<li>Engaging with internal stakeholders and outside counsel and providing the highest quality legal advice related to national security and platform abuse issues;</li>\n<li>Advising on fast-paced platform abuse investigations related to critical harm areas;</li>\n<li>Advising on legal and policy frameworks that support the responsible deployment of AI systems, balancing the protection of user privacy and trust with the need to detect, investigate, and mitigate harmful or abusive uses of the platform.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>10+ years of combined legal experience at fast-paced technology companies, technology-focused law firms, or relevant government agencies;</li>\n<li>A strong sense of ownership, are inquisitive and enthusiastic about technology, and can demonstrate sound judgment in ambiguous situations;</li>\n<li>Strong communication skills with the ability to convey complex legal principles clearly and concisely;</li>\n<li>Demonstrated ability to work collaboratively in a cross-functional environment;</li>\n<li>A JD and license or qualification to practice in your jurisdiction.</li>\n</ul>\n<p><strong>About OpenAI</strong> OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_561e4146-713","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/5caecd51-aa62-4eff-a191-3a60978b4969","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$297K – $330K • Offers Equity","x-skills-required":["national security","platform abuse","AI","technology","law","policy","compliance","investigations","special projects"],"x-skills-preferred":["JD","license to practice law","technology expertise","communication skills","collaboration"],"datePosted":"2026-03-06T18:39:15.326Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"national security, platform abuse, AI, technology, law, policy, compliance, investigations, special projects, JD, license to practice law, technology expertise, communication skills, collaboration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":297000,"maxValue":330000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0ccd7e3-4cb"},"title":"Data Scientist, Preparedness","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Data Scientist, Preparedness</strong></p>\n<p><strong>Location</strong></p>\n<p>San 
Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Data Science</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$347K – $400K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and 
wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.</p>\n<p>Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.</p>\n<p>The mission of the Preparedness team is to:</p>\n<ol>\n<li>Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society</li>\n<li>Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems</li>\n</ol>\n<p>Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re hiring a Data Scientist to help build, evaluate, and continuously improve mitigations that prevent extreme harms from AI systems. 
This role is for an experienced, highly autonomous individual contributor who can take ambiguous problem statements, structure rigorous analyses, and translate findings into actionable product and policy changes.</p>\n<p>This position goes beyond “running evals.” You’ll help create mitigation intelligence and monitoring systems that enable OpenAI to detect issues early, measure effectiveness over time, and reduce both over-blocking (unnecessary friction) and under-blocking (missed harm).</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Evaluate and improve mitigation systems, including classifiers and detection pipelines across domains (e.g., biosecurity, cybersecurity, and emerging risk areas).</li>\n</ul>\n<ul>\n<li>Diagnose false positives and false negatives with deep error analysis, root cause investigation, and clear recommendations for mitigation adjustments.</li>\n</ul>\n<ul>\n<li>Build monitoring and measurement frameworks to track mitigation effectiveness over time and across user segments and use cases.</li>\n</ul>\n<ul>\n<li>Identify trends in over-blocking vs. 
under-blocking, quantify customer impact, and propose prioritized interventions.</li>\n</ul>\n<ul>\n<li>Develop insights from customer feedback, complaints, and usage patterns to detect shifts in adversarial behavior and system failure modes.</li>\n</ul>\n<ul>\n<li>Expand risk monitoring into new areas, including cybersecurity threats and model loss-of-control or sabotage scenarios, in partnership with domain experts.</li>\n</ul>\n<ul>\n<li>Communicate results to technical and executive stakeholders with crisp narratives, decision-ready metrics, and clear tradeoffs.</li>\n</ul>\n<p><strong>You might thrive in this role if you are:</strong></p>\n<ul>\n<li>An autonomous operator: you can take a problem statement and independently structure the analysis end-to-end.</li>\n</ul>\n<ul>\n<li>Strong at executive-ready communication: concise, clear, and outcome-oriented.</li>\n</ul>\n<ul>\n<li>Skilled in turning analysis into productable changes: you’re comfortable influencing across functions to drive mitigation improvements.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Significant experience in data science or applied analytics in high-stakes domains (e.g., security, trust &amp; safety, abuse prevention, fraud, platform integrity, or reliability).</li>\n</ul>\n<ul>\n<li>Strong foundations in experimentation, causal thinking, and/or observational inference; ability to design robust measurement under imperfect data.</li>\n</ul>\n<ul>\n<li>Fluency in SQL and Python (or equivalent) for analysis, modeling, and building monitoring workflows.</li>\n</ul>\n<ul>\n<li>Experience building metrics, dashboards, and operational monitoring that meaningfully changes outcomes (not just reporting).</li>\n</ul>\n<ul>\n<li>Track record of driving cross-functional impact with engineering, product, and research partners</li>\n</ul>","url":"https://yubhub.co/jobs/job_c0ccd7e3-4cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/efcc3430-14c8-4022-8350-8146ffb867ab","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$347K – $400K • Offers Equity","x-skills-required":["data science","applied analytics","security","trust & safety","abuse prevention","fraud","platform integrity","reliability","SQL","Python","experimentation","causal thinking","observational inference","measurement","metrics","dashboards","operational monitoring"],"x-skills-preferred":["machine learning","deep learning","natural language processing","computer vision","data engineering","data architecture"],"datePosted":"2026-03-06T18:35:03.164Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data science, applied analytics, security, trust & safety, abuse prevention, fraud, platform integrity, reliability, SQL, Python, experimentation, causal thinking, observational inference, measurement, metrics, dashboards, operational monitoring, machine learning, deep learning, natural language processing, computer vision, data engineering, data architecture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":347000,"maxValue":400000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2dfc6c9-22d"},"title":"Trust & Safety Operations Analyst, Ads","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Trust &amp; Safety Operations Analyst, 
Ads</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$189K – $280K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such 
as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong><strong>About the Team</strong></strong></p>\n<p>At OpenAI, our <strong>User Safety &amp; Risk Operations</strong> team is responsible for safeguarding our platform and users from abuse, fraud, and emerging threats. We operate at the intersection of product risk, operational scale, and real-time safety response—supporting users ranging from individuals to global enterprises, as well as advertisers and creators.</p>\n<p>The Ads Trust &amp; Safety Operations team protects our users, advertisers, and creators across all monetized surfaces. As OpenAI introduces new revenue-generating formats and partnerships, this team ensures these experiences remain safe, compliant, high-quality, and aligned with our broader safety standards. We partner closely with Product, Engineering, Policy, and Legal to identify emerging risks, build and mature enforcement systems, and ensure scalable, high-integrity operations.</p>\n<p><strong><strong>About the Role</strong></strong></p>\n<p>We’re looking for a senior operator to help build and scale Ads Trust &amp; Safety Operations at OpenAI. 
In this role, you’ll drive critical Ads T&amp;S workstreams end-to-end, partnering closely with Product, Policy, Engineering, Legal, and Operations to design scalable enforcement processes, strengthen detection and tooling, and ensure we’re prepared to support Ads and monetization safely at scale.</p>\n<p>You’ll operate at the intersection of strategy and execution—translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows.</p>\n<p>This role requires someone who is highly operational, excellent at execution, and comfortable driving clarity amid ambiguity. You should be eager to build scalable systems and processes from the ground up and work in lockstep with policy and product teams as we rapidly iterate on advertising strategies and features.</p>\n<p><strong><strong>In this role, you will:</strong></strong></p>\n<ul>\n<li>Own complex, high-impact Ads Trust &amp; Safety problem areas from strategy through execution.</li>\n</ul>\n<ul>\n<li>Design and scale operational workflows for Ads Trust &amp; Safety, including enforcement models, review processes, escalation paths, and quality frameworks.</li>\n</ul>\n<ul>\n<li>Partner closely with Product, Policy, and Engineering to translate risk and policy requirements into scalable systems, tooling, and automation.</li>\n</ul>\n<ul>\n<li>Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.</li>\n</ul>\n<ul>\n<li>Use data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals and solutions grounded in metrics and operational signals.</li>\n</ul>\n<ul>\n<li>Contribute to the evolution of Ads Trust &amp; Safety cross-functional strategy, including how safety scales with automation, classifiers, and self-service tooling.</li>\n</ul>\n<ul>\n<li>Act as a senior XFN partner and subject-matter expert, 
influencing direction through strong judgment, clear communication, and credibility.</li>\n</ul>\n<p><strong><strong>You might thrive in this role if you have:</strong></strong></p>\n<ul>\n<li>5+ years of experience in Trust &amp; Safety, Business Integrity, Fraud &amp; Abuse, Risk Operations, or a closely related domain.</li>\n</ul>\n<ul>\n<li>Deep familiarity with ads ecosystems and advertiser risk</li>\n</ul>\n<ul>\n<li>Proven ability to independently own ambiguous, cross-functional initiatives and drive them to completion.</li>\n</ul>\n<ul>\n<li>Strong operational judgment and systems thinking—able to design solutions that scale beyond manual review.</li>\n</ul>\n<ul>\n<li>Experience working closely with Product, Policy, and Engineering teams on enforcement systems, tooling, or automation.</li>\n</ul>\n<ul>\n<li>Comfort using data and operational metrics to inform decisions, prioritize work, and measure impact.</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills, including the ability to explain complex risk tradeoffs to diverse audiences.</li>\n</ul>\n<ul>\n<li>Experience designing or partnering on automated enforcement, classifiers, or decision-support tools.</li>\n</ul>\n<p><strong><strong>About OpenAI</strong></strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits humanity. 
It was founded in 2015 and has since grown to become a leading player in the AI industry.</p>","url":"https://yubhub.co/jobs/job_d2dfc6c9-22d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c9e9e3a5-fb93-4162-b876-6266016819c0","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$189K – $280K","x-skills-required":["Trust & Safety","Business Integrity","Fraud & Abuse","Risk Operations","Ads ecosystems","advertiser risk","enforcement systems","tooling","automation","data","operational metrics","communication","risk tradeoffs","automated enforcement","classifiers","decision-support tools"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:33:25.010Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust & Safety, Business Integrity, Fraud & Abuse, Risk Operations, Ads ecosystems, advertiser risk, enforcement systems, tooling, automation, data, operational metrics, communication, risk tradeoffs, automated enforcement, classifiers, decision-support tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2aaccebf-892"},"title":"Fraud & Risk Analyst","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Fraud &amp; Risk Analyst</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full 
time</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$252K – $280K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our 
benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Account &amp; Platform Integrity team protects OpenAI’s ecosystem from fraud, impersonation, abuse, and account-level threats. We ensure that the people and organizations using OpenAI are who they claim to be, that access is used appropriately, and that bad actors are prevented from exploiting the platform.</p>\n<p>We operate at the intersection of identity, access, compliance, and abuse prevention, working closely with Product, Engineering, Legal, Go-To-Market, and Support teams to stop harmful activity before it impacts users, customers, or the business. Our work directly protects revenue, user trust, and platform safety across ChatGPT, the API, and enterprise products.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re hiring a Fraud &amp; Risk Analyst to help safeguard OpenAI by investigating, validating, and monitoring customer accounts and organizations. You will focus on identity, legitimacy, and risk, ensuring accounts are properly verified, access is appropriate, and emerging threats are detected early.</p>\n<p>You’ll handle sensitive and high-stakes investigations involving fraud, impersonation, sanctions, misuse of access, and coordinated abuse. 
Your work will directly influence who can use OpenAI’s products and how safely we can scale.</p>\n<p><strong>Note: This role may involve reviewing sensitive, confidential, or disturbing content.</strong></p>\n<p>We use a hybrid work model of 3 days in the office per week in our San Francisco office.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Review and verify customer identities, organizations, and ownership structures</li>\n</ul>\n<ul>\n<li>Investigate suspicious or high-risk accounts (e.g., fraud, impersonation, shell companies, abuse of API or ChatGPT access)</li>\n</ul>\n<ul>\n<li>Evaluate documents, internal data, and third-party sources to determine legitimacy and risk</li>\n</ul>\n<ul>\n<li>Enforce account-level actions such as approvals, restrictions, suspensions, or escalations</li>\n</ul>\n<ul>\n<li>Serve as the case owner for complex, high-visibility verification and integrity cases</li>\n</ul>\n<ul>\n<li>Partner with Legal, Compliance, Sales, and Support to resolve issues quickly and accurately</li>\n</ul>\n<ul>\n<li>Handle escalations, appeals, and sensitive customer communications</li>\n</ul>\n<ul>\n<li>Help design and improve verification workflows, fraud detection, and risk-scoring systems</li>\n</ul>\n<ul>\n<li>Contribute to automation, tooling, and human-in-the-loop review pipelines</li>\n</ul>\n<ul>\n<li>Identify patterns of abuse and recommend new controls or safeguards</li>\n</ul>\n<ul>\n<li>Analyze data to uncover fraud and abuse trends</li>\n</ul>\n<ul>\n<li>Provide feedback to Product and Engineering to improve onboarding, verification, and access controls</li>\n</ul>\n<ul>\n<li>Create clear playbooks and guidance for frontline teams handling high-risk accounts</li>\n</ul>\n<p><strong>You Might Thrive In This Role If You...</strong></p>\n<ul>\n<li>Have 5+ years of experience in verifications, fraud, trust &amp; safety, or risk investigations</li>\n</ul>\n<ul>\n<li>Are comfortable making high-impact decisions about who 
should — or should not — have platform access</li>\n</ul>\n<ul>\n<li>Have experience working cross-functionally with Legal, Product, Sales, and Operations</li>\n</ul>\n<ul>\n<li>Enjoy building systems, not just running them — especially in fast-moving environments</li>\n</ul>\n<ul>\n<li>Are calm under pressure, detail-oriented, and trusted with sensitive and ambiguous cases</li>\n</ul>\n<ul>\n<li>Thrive in environments that require judgment, speed, and accountability</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2aaccebf-892","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2f13a80f-645e-44a6-9af8-f183d3409203","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$252K – $280K • Offers 
Equity","x-skills-required":["fraud","risk","investigations","identity","legitimacy","risk","access","compliance","abuse","prevention","data","analysis","trends","fraud","detection","risk-scoring","systems","automation","tooling","human-in-the-loop","review","pipelines","patterns","abuse","controls","safeguards","onboarding","verification","access","controls"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:33:12.581Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"fraud, risk, investigations, identity, legitimacy, risk, access, compliance, abuse, prevention, data, analysis, trends, fraud, detection, risk-scoring, systems, automation, tooling, human-in-the-loop, review, pipelines, patterns, abuse, controls, safeguards, onboarding, verification, access, controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_119df59e-db7"},"title":"Software Engineer, AI Safety","description":"<p><strong>Software Engineer, AI Safety</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$185K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Safety Systems team is dedicated to ensuring the safety, 
robustness, and reliability of AI models and their deployment in the real world.</p>\n<p>Building on the many years of our practical alignment work and applied safety efforts, Safety Systems addresses emerging safety issues and develops new fundamental solutions to enable the safe deployment of our most advanced models and future AGI, to make AI that is beneficial and trustworthy.</p>\n<p>Learn more about OpenAI’s approach to safety</p>\n<p><strong>About the Role</strong></p>\n<p>At OpenAI, we&#39;re dedicated to advancing artificial intelligence, and we know that creating a secure and reliable platform is vital to our mission. That&#39;s why we&#39;re seeking a software engineer to help us build out our trust and safety capabilities.</p>\n<p>In this role, you&#39;ll work with our entire engineering team to design and implement systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You&#39;ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Architect, build, and maintain anti-abuse and content moderation infrastructure designed to protect us and end users from unwanted behavior.</li>\n</ul>\n<ul>\n<li>Work closely with our other engineers and researchers to utilize both industry standard and novel AI techniques to measure, monitor and improve AI models’ alignment to human values.</li>\n</ul>\n<ul>\n<li>Diagnose and remediate active incidents on the platform and build new tooling and infrastructure that address the root causes of system failure.</li>\n</ul>\n<p><strong>You might thrive in this role if:</strong></p>\n<ul>\n<li>You have built and run production services in a high growth, rapidly scaling environment.</li>\n</ul>\n<ul>\n<li>You can debug live issues and restore systems quickly.</li>\n</ul>\n<ul>\n<li>You have worked on content safety, fraud, or abuse, or are motivated 
and excited to work on present-day (“now-term”) AI safety.</li>\n</ul>\n<ul>\n<li>You have experience with Python or with modern languages such as C++, Rust, or Go, and are able to quickly ramp up on Python.</li>\n</ul>\n<ul>\n<li>You understand the trade-offs of capabilities and risks and navigate them to deploy novel products and features safely.</li>\n</ul>\n<ul>\n<li>You can critically assess risks of a new product or feature and devise innovative solutions to mitigate these risks without harming the product experience.</li>\n</ul>\n<ul>\n<li>You’re pragmatic. You know when to build a quick, good-enough fix, and when to invest in a robust, lasting solution.</li>\n</ul>\n<ul>\n<li>You possess strong project management skills. You are self-directed and can remove roadblocks to drive projects to completion with minimal guidance.</li>\n</ul>\n<ul>\n<li>You’ve deployed classifiers or machine learning models, or are excited to learn about modern ML infra.</li>\n</ul>\n<p><strong>Our tech stack</strong></p>\n<ul>\n<li>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills who understand the fundamental problems these tools solve, and can quickly pick up new tools and frameworks.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_119df59e-db7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/b9dee2a0-9bb3-447e-9bce-2b1bed784e5b","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$185K – $325K • Offers Equity","x-skills-required":["Python","Terraform","Kubernetes","Azure","Postgres","Kafka","C++","Rust","Go","Content safety","Fraud","Abuse","AI safety","Machine learning","Classifiers","ML infra"],"x-skills-preferred":["Project management","Debugging","System administration","Cloud computing","Containerization","DevOps","Agile development","Scrum","Kanban"],"datePosted":"2026-03-06T18:29:01.424Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Terraform, Kubernetes, Azure, Postgres, Kafka, C++, Rust, Go, Content safety, Fraud, Abuse, AI safety, Machine learning, Classifiers, ML infra, Project management, Debugging, System administration, Cloud computing, Containerization, DevOps, Agile development, Scrum, Kanban","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":185000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56bb0069-e56"},"title":"Software 
Engineer, Scaled Abuse","description":"<p><strong>Software Engineer, Scaled Abuse</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as 
eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the team</strong></p>\n<p>The Applied team safely brings OpenAI&#39;s technology to the world. We released ChatGPT; Plugins; DALL·E; and the APIs for GPT-5, embeddings, and fine-tuning. We also operate inference infrastructure at scale. There&#39;s a lot more on the immediate horizon.</p>\n<p>Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. ChatGPT is a prime example of what is currently possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.</p>\n<p><strong>About the role</strong></p>\n<p>The Scaled Abuse team protects OpenAI’s products and customers by detecting, preventing, and responding to fraudulent and abusive behavior at scale. We build and operate the backend and data systems that power real-time detection, investigation workflows, and enforcement — balancing strong protections with a great user experience as the platform grows.</p>\n<p>Our work sits at the intersection of engineering and abuse expertise: we partner closely with Trust &amp; Safety, Security, and Product to understand emerging attack patterns, translate messy signals into clear system behavior, and continuously harden our defenses. 
The problems are dynamic and ambiguous by default, so we value engineers who can quickly dive into an unfamiliar codebase, develop strong intuition about how it works end-to-end, and propose pragmatic improvements that make the entire stack more resilient.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience</li>\n</ul>\n<ul>\n<li>Work closely with finance, security, product, research, and trust &amp; safety operations to holistically combat fraudulent and abusive actors on our system</li>\n</ul>\n<ul>\n<li>Stay abreast of the latest techniques and tools to remain several steps ahead of determined and well-resourced adversaries</li>\n</ul>\n<ul>\n<li>Utilize GPT-5 and future models to more effectively combat fraud and abuse</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have at least 5 years of software engineering experience in backend and data systems.</li>\n</ul>\n<ul>\n<li>Have at least 2 years of experience in fraud or abuse analysis, investigation, and/or operations</li>\n</ul>\n<ul>\n<li>Can dive into our codebase, intuit how it works, and make suggestions that will lead us to a stronger engineering position.</li>\n</ul>\n<ul>\n<li>Have a voracious and intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing that knowledge clearly and concisely with others</li>\n</ul>\n<ul>\n<li>Are comfortable with ambiguity and rapidly changing conditions. You view changes as an opportunity to add structure and order when necessary</li>\n</ul>\n<ul>\n<li>Have experience in Machine Learning techniques (a plus, but not required)</li>\n</ul>\n<p><strong>Our tech stack</strong></p>\n<p>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. 
While we value experience with these technologies, we are primarily looking for engineers with strong technical skills and the ability to quickly pick up new tools and frameworks.</p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_56bb0069-e56","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3c67f712-697d-48d8-b05c-01be896e61da","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K","x-skills-required":["software engineering","backend and data systems","fraud or abuse analysis","investigation and/or operations","GPT-5 and future models"],"x-skills-preferred":["Machine Learning techniques","Terraform","Kubernetes","Azure","Python","Postgres","Kafka"],"datePosted":"2026-03-06T18:25:36.660Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, backend and data systems, fraud or abuse analysis, investigation and/or operations, GPT-5 and future models, Machine Learning techniques, Terraform, Kubernetes, Azure, 
Python, Postgres, Kafka","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5e4dec3-129"},"title":"Principal Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft is looking for a talented Principal Applied Scientist at its Bengaluru office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising the advertising ecosystem. You&#39;ll work directly with leadership to shape the company&#39;s direction in the threat modelling and adversarial defence space.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Principal Applied Scientist, you will be responsible for developing and maintaining comprehensive adversarial frameworks to map the lifecycle of emerging threats, from account compromise (ATO) to malicious payload delivery. 
You will also advance the continuous, signal-based security protocol and research and implement behavioral biometrics and Proof of Liveness models to detect synthetic identities and coordinated fraud rings.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Develop and maintain comprehensive adversarial frameworks to map the lifecycle of emerging threats, from account compromise (ATO) to malicious payload delivery.</li>\n<li>Advance the continuous, signal-based security protocol.</li>\n<li>Research and implement behavioral biometrics and Proof of Liveness models to detect synthetic identities and coordinated fraud rings.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s, Master&#39;s, or PhD degree in Computer Science, Cybersecurity, Mathematics, or a related field, with 10+ years of related experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Deep technical expertise in Cybersecurity, Anti-Abuse, or Adversarial Machine Learning.</li>\n<li>Strong programming skills in C++ or Python (at least one is required), with experience in building production-quality security or ML systems.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and collaboration skills, with experience articulating complex security risks to business and product leadership.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>
","url":"https://yubhub.co/jobs/job_d5e4dec3-129","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-applied-scientist-9/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Cybersecurity","Anti-Abuse","Adversarial Machine Learning","C++","Python"],"x-skills-preferred":["Graph Neural Networks","Fraud ring detection","Behavioral biometrics"],"datePosted":"2026-03-06T07:26:33.904Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cybersecurity, Anti-Abuse, Adversarial Machine Learning, C++, Python, Graph Neural Networks, Fraud ring detection, Behavioral biometrics"}]}