{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/trust-and-safety"},"x-facet":{"type":"skill","slug":"trust-and-safety","display":"Trust And Safety","count":32},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_10bf8d86-b30"},"title":"Research Engineer, Safeguards Labs","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re hiring research engineers to define and execute the Labs research agenda. You&#39;ll scope your own projects, run experiments end-to-end, and decide when an idea is ready to hand off to a production team , or when to kill it and move on.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead and contribute to research projects investigating new methods for detecting misuse of Claude, identifying malicious organisations and accounts, strengthening model safeguards, and other safety needs.</li>\n</ul>\n<ul>\n<li>Design and run offline analyses over model usage data to surface abuse patterns, build classifiers and detection systems, and evaluate their effectiveness.</li>\n</ul>\n<ul>\n<li>Develop and iterate on prototypes that could eventually feed signals into the real-time safeguards path, partnering with engineers on tech transfer.</li>\n</ul>\n<ul>\n<li>Contribute to a broader research portfolio investigating methods for detecting abusive behaviour in chat-based or agentive workflows, and for training the model to robustly refrain from dangerous responses or behaviours without over-refusing.</li>\n</ul>\n<ul>\n<li>Build evaluations and methodologies for measuring whether safeguards actually work, including in agentic settings.</li>\n</ul>\n<ul>\n<li>Write up findings clearly so they inform decisions across Trust &amp; Safety, research, and product teams.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have a track record of independently driving research projects from ambiguous problem statements to concrete results, ideally in AI, ML, security, integrity, or a related technical field.</li>\n</ul>\n<ul>\n<li>Are comfortable scoping your own work and switching between research, engineering, and analysis as a project demands.</li>\n</ul>\n<ul>\n<li>Have working familiarity with how large language models operate , sampling, prompting, training , even if LLMs aren&#39;t your primary background.</li>\n</ul>\n<ul>\n<li>Are proficient in Python and comfortable working with large datasets.</li>\n</ul>\n<ul>\n<li>Care about the societal impacts of AI and want your work to directly reduce real-world harm.</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience building and training machine learning models, including classifiers for abuse, fraud, integrity, or security applications.</li>\n</ul>\n<ul>\n<li>Knowledge of evaluation methodologies for language models and experience designing evals.</li>\n</ul>\n<ul>\n<li>Experience with agentic environments and evaluating model behaviour in 
them.</li>\n</ul>\n<ul>\n<li>Background in trust and safety, integrity, fraud detection, threat intelligence, or adversarial ML.</li>\n</ul>\n<ul>\n<li>Experience with red teaming, jailbreak research, or interpretability methods like steering vectors.</li>\n</ul>\n<ul>\n<li>A history of taking research prototypes and transferring them into production systems.</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits</li>\n</ul>\n<ul>\n<li>Optional equity donation matching</li>\n</ul>\n<ul>\n<li>Generous vacation and parental leave</li>\n</ul>\n<ul>\n<li>Flexible working hours</li>\n</ul>\n<ul>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p><strong>Visa Sponsorship</strong></p>\n<ul>\n<li>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_10bf8d86-b30","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5191785008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["Python","Machine learning","Large language models","Security","Integrity"],"x-skills-preferred":["Experience building and training machine learning models","Knowledge of evaluation methodologies for language models","Experience with agentic environments","Background in trust and safety","Experience with red teaming"],"datePosted":"2026-04-18T15:55:10.055Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine learning, Large language models, Security, Integrity, Experience building and training machine learning models, Knowledge of evaluation methodologies for language models, Experience with agentic environments, Background in trust and safety, Experience with red teaming","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_051843ef-f93"},"title":"Vendor and Contract Manager, Safeguards","description":"<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships. 
This includes identifying and selecting vendors, contract negotiation, onboarding, ongoing performance management, and renewal.</p>\n<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>\n<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>\n<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue in one day.</p>\n<p>*Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>\n<p>Responsibilities:</p>\n<p>Vendor Selection &amp; Onboarding - Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</p>\n<p>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</p>\n<p>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</p>\n<p>Contract &amp; Budget Management - Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</p>\n<p>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</p>\n<p>Handle invoicing, payment, and renewal processes with partners</p>\n<p>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</p>\n<p>Ongoing Vendor &amp; Partner Management - Manage vendor and researcher access to models and products during testing phases and trials</p>\n<p>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</p>\n<p>Report on vendor performance, spend, and contract status to Safeguards leadership</p>\n<p>You may be a good fit if you have:</p>\n<p>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</p>\n<p>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</p>\n<p>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</p>\n<p>Understanding of AI safety, account abuse, or platform integrity issues: you know 
what verification vendors, threat intelligence providers, and content screening tools actually do</p>\n<p>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</p>\n<p>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</p>\n<p>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</p>\n<p>Nice to have:</p>\n<p>Experience in AI safety or AI-adjacent vendor ecosystems</p>\n<p>Familiarity with procurement tools such as Ironclad or Zip</p>\n<p>Annual compensation range for this role is $245,000-$285,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_051843ef-f93","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156596008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["vendor management","procurement","contract operations","risk management","fraud prevention","compliance","trust and safety","AI safety","account abuse prevention","platform integrity","verification vendors","threat intelligence providers","content screening tools"],"x-skills-preferred":["Ironclad","Zip","research collaboration","civil society consultation","model red-teaming"],"datePosted":"2026-04-18T15:54:23.403Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, verification vendors, threat intelligence providers, content screening tools, Ironclad, Zip, research collaboration, civil society consultation, model red-teaming","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da06ef8d-890"},"title":"Vendor and Contract Manager, Safeguards","description":"<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships, from identifying and selecting vendors through contract negotiation, onboarding, ongoing performance management, and renewal.</p>\n<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. 
You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>\n<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>\n<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue in one day.</p>\n<p>*Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Vendor Selection &amp; Onboarding: Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>\n<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>\n<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>\n<li>Contract &amp; Budget Management: Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>\n<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>\n<li>Handle invoicing, payment, and renewal processes with partners</li>\n<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>\n<li>Ongoing Vendor &amp; Partner Management: Manage vendor and researcher access to models and products during testing phases and trials</li>\n<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>\n<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>\n<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>\n<li>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</li>\n<li>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>\n<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>\n<li>Strong cross-functional 
collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>\n<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>\n<li>Familiarity with procurement tools such as Ironclad or Zip</li>\n</ul>\n<p><strong>Logistics:</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_da06ef8d-890","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156596008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["vendor management","procurement","contract operations","risk management","fraud prevention","compliance","trust and safety","AI safety","account abuse prevention","platform integrity","cross-functional collaboration","ambiguity tolerance","fast-paced environments"],"x-skills-preferred":["AI safety vendor ecosystems","procurement tools","Ironclad","Zip"],"datePosted":"2026-04-18T15:53:59.839Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, cross-functional collaboration, ambiguity tolerance, fast-paced environments, AI safety vendor ecosystems, procurement tools, Ironclad, Zip","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_709f6628-a2b"},"title":"Safeguards Enforcement Analyst, Safety Evaluations","description":"<p>As a Safeguards Enforcement Analyst focused on Safety Evaluations, you&#39;ll play a central role in ensuring our models meet safety and policy standards before and after launch. 
You&#39;ll run and monitor evaluations, drive mitigations when issues surface, coordinate the creation of new evals, and help build the processes and documentation that allow the team to scale this work over time.</p>\n<p>This role requires someone who is detail-oriented, comfortable navigating ambiguity, and capable of coordinating across teams to break new ground and drive work to completion. This work is deeply cross-functional: you&#39;ll partner closely with policy experts, Safeguards engineering teams, and many other stakeholders throughout the organisation to ensure our evaluations are comprehensive and current, and that findings translate into meaningful improvements to model behaviour.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Support model launch readiness by running evaluations, monitoring and interpreting results, and surfacing regressions or unexpected behaviour changes to relevant stakeholders</li>\n</ul>\n<ul>\n<li>Partner closely with policy and domain experts throughout the evaluation lifecycle, from identifying risks and scoping the right evaluation approach, to coordinating creation of new evals and ensuring existing ones remain current with evolving policies, threat vectors, and model capabilities</li>\n</ul>\n<ul>\n<li>Work with cross-functional stakeholders to help manage evaluation outcomes, including interpreting results and driving mitigations where needed</li>\n</ul>\n<ul>\n<li>Think strategically about eval quality to build processes and eval paradigms that keep evaluations unsaturated, high-signal, and insightful as models improve</li>\n</ul>\n<ul>\n<li>Build out processes and frameworks for creating product-specific evaluations as Anthropic&#39;s product surface area expands</li>\n</ul>\n<ul>\n<li>Help design and scope tooling improvements that accommodate evolving eval needs and expand self-serve eval creation and iteration for non-technical users</li>\n</ul>\n<ul>\n<li>Write and maintain rigorous documentation for evaluation creation, execution, and interpretation as the team builds out eval tooling and processes</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have experience in trust and safety, content operations, policy enforcement, or a related operational role at a technology company</li>\n</ul>\n<ul>\n<li>Thrive in ambiguous, fast-moving environments: you&#39;re energised rather than frustrated when the path forward isn&#39;t clearly defined and you need to figure it out as you go</li>\n</ul>\n<ul>\n<li>Have experience building processes, workflows, or programmes from scratch (zero-to-one work), not just maintaining existing ones</li>\n</ul>\n<ul>\n<li>Have strong programme management instincts, naturally creating structure around complex, multi-stakeholder efforts by tracking timelines, dependencies, and deliverables to keep work on track</li>\n</ul>\n<ul>\n<li>Are eager to expand your technical toolkit, including adopting internal tools and AI-assisted workflows (e.g., Claude Code) to accelerate your work</li>\n</ul>\n<ul>\n<li>Can manage multiple concurrent workstreams across different domain areas without losing track of details; strong prioritisation and context-switching are essential when deadlines and priorities shift quickly</li>\n</ul>\n<ul>\n<li>Are a strong generalist comfortable moving fluidly across different types of work and switching contexts throughout the day</li>\n</ul>\n<ul>\n<li>Are comfortable making judgment calls with incomplete information and escalating appropriately when 
needed</li>\n</ul>\n<ul>\n<li>Communicate clearly and concisely, both in writing and cross-functionally</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience operating under tight, high-stakes timelines (such as product launch cycles, incident response, or regulatory deadlines) where information and priorities can shift with little notice</li>\n</ul>\n<ul>\n<li>Experience coordinating across engineering, policy, and product teams to translate findings into concrete action</li>\n</ul>\n<ul>\n<li>Experience building and maintaining SOPs, runbooks, and operational documentation in fast-changing environments</li>\n</ul>\n<ul>\n<li>Proficiency with data tools (SQL, dashboards, spreadsheets) sufficient to maintain and improve workflows</li>\n</ul>\n<ul>\n<li>Comfort working with sensitive content areas as part of eval creation or enforcement review responsibilities</li>\n</ul>\n<p>The annual compensation range for this role is $230,000-$270,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_709f6628-a2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5137183008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230,000-$270,000 USD","x-skills-required":["trust and safety","content operations","policy enforcement","programme management","technical toolkit","AI-assisted workflows","data tools","SQL","dashboards","spreadsheets"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:47.754Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC; San Francisco, CA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content operations, policy enforcement, programme management, technical toolkit, AI-assisted workflows, data tools, SQL, dashboards, spreadsheets","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f53caced-334"},"title":"Software Engineer, Cloud Inference Safeguards","description":"<p>We are seeking a Software Engineer to build and operate the safety, oversight, and intervention mechanisms that protect Claude on third-party cloud service provider (CSP) platforms.</p>\n<p>As the engineer responsible for Safeguards on those surfaces, you will ensure that every request served through our CSP partners is monitored for misuse, enforced against policy, and compliant with the data residency and privacy commitments that enterprise CSP customers expect.</p>\n<p>You will sit at the seam between the Safeguards organisation and the Cloud Inference team: taking classifiers, detection signals, and enforcement policies developed by Safeguards and making them run reliably inside a CSP partner&#39;s infrastructure at serving-path latency and scale.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build, deploy and operate real-time safeguards infrastructure (classifiers, rate limits, 
enforcement actions, and intervention hooks) embedded directly in the third-party CSP inference serving path</li>\n</ul>\n<ul>\n<li>Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring we can detect abuse and monitor model behaviour while honouring regionalisation boundaries and enterprise contractual commitments</li>\n</ul>\n<ul>\n<li>Develop telemetry, logging, and evaluation pipelines that give Safeguards, Policy, and T&amp;S operational teams situational awareness over CSP traffic and close the visibility gap between third-party and first-party serving</li>\n</ul>\n<ul>\n<li>Dive into the CSP serving stack to identify the lowest-impact points to gather signals or introduce interventions without degrading latency, stability, or overall architecture</li>\n</ul>\n<ul>\n<li>Hold a high operational bar: own on-call, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep Claude safe</li>\n</ul>\n<ul>\n<li>Work closely with Safeguards research, Policy &amp; Enforcement, the Cloud Inference team, and CSP partner contacts to turn detection research and policy decisions into production enforcement that works inside a partner&#39;s cloud.</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have a Bachelor&#39;s degree in Computer Science, Software Engineering, or comparable experience</li>\n</ul>\n<ul>\n<li>Have 4–10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust &amp; safety, anti-abuse, fraud, or integrity systems</li>\n</ul>\n<ul>\n<li>Are proficient in Python and comfortable working across the stack, from request-path services to data pipelines to internal tooling</li>\n</ul>\n<ul>\n<li>Think adversarially: you can see a system from a bad actor&#39;s perspective, anticipate how they will respond to countermeasures, and design defences in depth rather than single points of enforcement</li>\n</ul>\n<ul>\n<li>Have experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets</li>\n</ul>\n<ul>\n<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>\n</ul>\n<ul>\n<li>Have strong communication skills and can explain complex technical and risk tradeoffs to non-technical stakeholders across Policy, Legal, and partner organisations</li>\n</ul>\n<ul>\n<li>Enjoy working in a fast-paced, early environment; comfortable with adapting priorities as driven by the rapidly evolving AI space</li>\n</ul>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>Building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale</li>\n</ul>\n<ul>\n<li>Machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production</li>\n</ul>\n<ul>\n<li>Major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring), or experience shipping software that runs inside a partner&#39;s cloud rather than your own</li>\n</ul>\n<ul>\n<li>Data residency, privacy engineering, or compliance-constrained architectures, particularly where telemetry has to stay within regional or contractual 
boundaries</li>\n</ul>\n<ul>\n<li>Working closely with operational and human-review teams to build custom internal tooling, admin UX, and alerting</li>\n</ul>\n<ul>\n<li>Adversarial mindset: has shipped defences against motivated attackers before, knows what it feels like when they adapt, and can sprint to close a gap before it becomes an incident</li>\n</ul>\n<ul>\n<li>Comfortable operating at the intersection of platform/infra engineering and trust &amp; safety: neither a pure infra engineer nor a pure T&amp;S engineer, but someone who can credibly do both</li>\n</ul>\n<ul>\n<li>Has shipped software that runs inside someone else&#39;s infrastructure (partner cloud, embedded deployment, or similar) and knows how to get things done when you don&#39;t control the whole stack</li>\n</ul>\n<ul>\n<li>Senior enough to own a cross-team seam independently, drive consensus across orgs, and make latency/safety tradeoff calls without escalation</li>\n</ul>\n<ul>\n<li>TypeScript or Rust, and agentic coding tools such as Claude Code</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $405,000-$485,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f53caced-334","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5168829008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Cloud service provider (CSP)","Data residency and privacy","Machine learning serving infrastructure","Major cloud platform internals","Data residency, privacy engineering, or compliance-constrained architectures"],"x-skills-preferred":["TypeScript","Rust","Agentic coding tools","Claude Code","Trust and safety","Anti-abuse","Fraud","Integrity systems"],"datePosted":"2026-04-18T15:53:08.973Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Cloud service provider (CSP), Data residency and privacy, Machine learning serving infrastructure, Major cloud platform internals, Data residency, privacy engineering, or compliance-constrained architectures, TypeScript, Rust, Agentic coding tools, Claude Code, Trust and safety, Anti-abuse, Fraud, Integrity systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86d4c902-c89"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and 
harassment.</p>\n<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful 
reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $245,000-$285,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86d4c902-c89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis tools","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","non-consensual intimate imagery","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem working on these harms","open-source investigations or threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:52:37.777Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e95732e6-2ad"},"title":"Software Engineer, Account Abuse","description":"<p>About the role</p>\n<p>The Account Abuse team at 
Anthropic is tasked with ensuring the company&#39;s computing capacity is allocated fairly, minimizing resources available to bad actors and preventing them from coming back. As a software engineer on this team, you will build systems that gather and analyze signals at scale, balancing tradeoffs and coordinating closely with stakeholder teams throughout the company.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Think and respond quickly in a rapidly-changing greenfield environment</li>\n<li>Jump into other teams&#39; code to identify key points to gather signals or introduce interventions with minimal impact on their systems&#39; stability, complexity, or overall architecture</li>\n<li>Integrate with third-party data-enrichment vendors</li>\n<li>Create monitoring dashboards, alerts, and internal admin UX</li>\n<li>Work closely with data scientists to maintain situational awareness of current usage patterns and trends, and with the Policy &amp; Enforcement team to maximize the impact of their human-review availability</li>\n<li>Build robust and reliable multi-layered defenses</li>\n<li>Lead root cause analyses and deep-dive investigations into account activity to identify abuse patterns, uncover emerging attack vectors, and inform both immediate enforcement actions and longer-term systemic defenses</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>5-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection</li>\n<li>Proficiency in Python, SQL, and data analysis tools</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p>Preferred qualifications</p>\n<ul>\n<li>Experience building trust and safety mechanisms for and using AI/ML systems, such as fraud-detection models or security monitoring tools or the infrastructure to support these systems at scale</li>\n<li>Experience working closely with operational teams to build custom internal tooling</li>\n</ul>\n<p>Annual compensation range</p>\n<p>$320,000-$405,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e95732e6-2ad","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5123039008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","SQL","data analysis tools","software engineering","integrity","spam","fraud","abuse detection"],"x-skills-preferred":["trust and safety mechanisms","AI/ML systems","fraud-detection models","security monitoring tools","infrastructure"],"datePosted":"2026-04-18T15:52:06.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, data analysis tools, software engineering, integrity, spam, fraud, abuse detection, trust and safety mechanisms, AI/ML systems, fraud-detection models, security monitoring tools, 
infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e03253e3-c7f"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>\n<p>You will be a member of the user well-being team, and your initial focus will be on standing up detection, review, and escalation workflows for this domain , from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity , using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets , then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem , including evolving trafficking and sextortion tactics , to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners , including hotlines, NGOs, and industry hash-sharing consortia , to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or 
commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p><strong>Compensation:</strong></p>\n<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e03253e3-c7f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis","detection and review workflows","sensitive content","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem","open-source investigations","threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:45:00.507Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis, detection and review workflows, sensitive content, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, commercial sexual exploitation, NGO and industry ecosystem, open-source investigations, threat actor profiling, generative AI products, AI safety and 
responsible technology development, real-world harm escalation pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1421728f-e82"},"title":"Safeguards Analyst, Account Abuse","description":"<p>As a Safeguards Analyst focusing on Account Abuse, you will play a critical role in building and scaling the detection, enforcement, and operational capabilities that protect our platform against scaled abuse.</p>\n<p>You will develop and iterate on account signals and prevention frameworks that consolidate internal and external data into actionable abuse indicators.</p>\n<p>You will develop and optimize identity and account-linking signals using graph-based data infrastructure to detect coordinated and scaled account abuse.</p>\n<p>You will evaluate, integrate, and operationalize third-party vendor signals, assessing whether new data sources provide genuine lift in detection.</p>\n<p>You will expand internal account signals with new data sources and behavioural indicators to improve detection coverage.</p>\n<p>You will build and maintain processes that evaluate new product launches for scaled abuse risks, working closely with product teams to ensure enforcement readiness.</p>\n<p>You will operationalize and iterate on enforcement tooling, including appeals workflows, review processes, and user communications, to maintain quality and scale with growing volume.</p>\n<p>You will analyse enforcement performance through operational metrics, partnering with the team to keep detection accurate as abuse patterns evolve.</p>\n<p>You will manage payment fraud and dispute operations to protect revenue and maintain our standing with payment partners.</p>\n<p>You will coordinate enforcement efforts for policy compliance gaps across products, working with relevant teams to build scalable review processes.</p>\n<p>You will collaborate with cross-functional teams (Engineering, Product, Legal, Data Science) to surface new signals and translate detection capabilities into enforcement workflows.</p>\n<p>You will maintain detailed documentation of signal development, enforcement processes, and operational decisions.</p>\n<p>This role requires 2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement.</p>\n<p>You should have hands-on experience building detection systems, risk models, or enforcement processes and workflows.</p>\n<p>You should have experience evaluating and integrating third-party data sources into detection or scoring pipelines.</p>\n<p>You should have strong SQL and Python skills; this role involves heavy data analysis across complex, multi-table data relationships.</p>\n<p>You should have familiarity with identity signals such as device fingerprinting, account linking, or entity resolution, or experience with appeals processes and customer-facing enforcement communications.</p>\n<p>You should have demonstrated ability to analyse complex data problems and translate findings into actionable improvements.</p>\n<p>You should have strong written and verbal communication skills, with the ability to explain technical tradeoffs and navigate cross-functional stakeholder conversations.</p>\n<p>Equivalent practical experience or a Bachelor&#39;s degree in Computer Science, Data Science, or related field is required.</p>\n<p>This role offers an annual 
salary range of $230,000-$310,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1421728f-e82","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108841008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230,000-$310,000 USD","x-skills-required":["risk scoring","fraud detection","trust and safety","policy enforcement","SQL","Python","identity signals","device fingerprinting","account linking","entity resolution","appeals processes","customer-facing enforcement communications"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:38.446Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"risk scoring, fraud detection, trust and safety, policy enforcement, SQL, Python, identity signals, device fingerprinting, account linking, entity resolution, appeals processes, customer-facing enforcement communications","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d48f20ce-8dc"},"title":"Manager, Law Enforcement Response Team","description":"<p>As the Manager of the Law Enforcement Response Team, you will lead a diverse, global team in processing sensitive legal requests from start to finish, including document intake, processing, and follow-up.</p>\n<p>Experience in leading law enforcement response operations, managing global teams, and optimizing workflows is essential. 
You will ensure accurate, efficient processing while mentoring and promoting accountability and innovation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead complex projects that impact organisational performance, aligning with priorities to set goals and optimise results.</li>\n</ul>\n<ul>\n<li>Maintain knowledge of regulations, policies, and procedures; speak to regulatory bodies or auditors on workflows and operations.</li>\n</ul>\n<ul>\n<li>Strengthen the team via hiring, development plans, and growth opportunities.</li>\n</ul>\n<ul>\n<li>Adapt to high-demand changes, delegate effectively, stay calm under pressure, and motivate the team.</li>\n</ul>\n<ul>\n<li>Assess team needs in a dynamic legal landscape, emphasising user protection, transparency, policy development, scaling, and distributed collaboration via electronic tools.</li>\n</ul>\n<ul>\n<li>Communicate with internal stakeholders like Safety Leadership and Counsel on operations; update team on company changes.</li>\n</ul>\n<ul>\n<li>Analyze team performance with data visualisations and feedback; drive global QA strategy and consistency.</li>\n</ul>\n<ul>\n<li>Propose process improvements based on trends, owning segments for operational efficiency.</li>\n</ul>\n<ul>\n<li>Support sensitive escalations, answer policy questions, interpret metrics, and improve processes cross-functionally.</li>\n</ul>\n<ul>\n<li>Join on-call rotations for emergencies, addressing sensitive content (e.g., exploitation, violence) maturely.</li>\n</ul>\n<ul>\n<li>You may represent the company externally, such as in witness testimony or law enforcement interactions.</li>\n</ul>\n<ul>\n<li>Drive AI Automation: Collaborate with product engineering and data science teams to implement AI-based tools for case management, optimising workflows and reducing manual workloads.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>MA/MS Degree or equivalent in business, criminal justice, political science, law, international relations, or related field.</li>\n</ul>\n<ul>\n<li>10+ years in law enforcement response, trust and safety, compliance, or legal background.</li>\n</ul>\n<ul>\n<li>5+ years managing people, including global/distributed teams.</li>\n</ul>\n<ul>\n<li>Fluent in written and spoken English; excellent judgment, strategic thinking, and detail-orientation.</li>\n</ul>\n<ul>\n<li>Experience with legal request processing, scaling operations, and policy development.</li>\n</ul>\n<ul>\n<li>Exceptional communication, writing, and analytical skills; passion for privacy and expression.</li>\n</ul>\n<ul>\n<li>Perseverance, grit; ability to work through ambiguity.</li>\n</ul>\n<ul>\n<li>Availability to be on-call, including over weekends.</li>\n</ul>\n<ul>\n<li>Skilled in systems, software (e.g., Google Suite, data tools); must pass background check</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>8-10+ years in Safety/Legal Operations at tech/social media company.</li>\n</ul>\n<ul>\n<li>An active X user with a deep understanding of the platform’s role as a global public square and its Safety challenges.</li>\n</ul>\n<ul>\n<li>Able to work onsite in Bastrop and travel as needed to collaborate with global teams.</li>\n</ul>\n<p>ITAR Requirements:</p>\n<ul>\n<li>To conform to U.S. Government export regulations, applicant must be a (i) U.S. citizen or national, (ii) U.S. lawful, permanent resident (aka green card holder), (iii) Refugee under 8 U.S.C. § 1157, or (iv) Asylee under 8 U.S.C. 
§ 1158, or be eligible to obtain the required authorisations from the U.S. Department of State. Learn more about the ITAR here.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d48f20ce-8dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.x.ai/","logo":"https://logos.yubhub.co/x.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4959528007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["law enforcement response","trust and safety","compliance","legal background","global/distributed teams","legal request processing","scaling operations","policy development","Google Suite","data tools"],"x-skills-preferred":["Safety/Legal Operations","X user","global public square","Safety challenges"],"datePosted":"2026-04-18T15:38:26.659Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"law enforcement response, trust and safety, compliance, legal background, global/distributed teams, legal request processing, scaling operations, policy development, Google Suite, data tools, Safety/Legal Operations, X user, global public square, Safety challenges"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f578503a-af9"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>\n<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need to have a proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>\n<p>You will also have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, and have strong skills in ethical reasoning and risk assessment. You will also have a strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>\n<p>In addition, you will have strong communication, interpersonal, analytical, and ethical decision-making skills. You will be committed to continuous improvement of processes to prioritize safety and risk mitigation. 
You will also have expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>\n<p>Preferred qualifications include experience working in a Trust and Safety for a social media company, leveraging AI or other automation tools. You will also have experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms. Additionally, you will have expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f578503a-af9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097904007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$43.75 - $62.50 USD hourly","x-skills-required":["Improving Large Language Models (LLMs)","Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE)","Online safety and reducing harm","Ethical reasoning and risk assessment","Data analysis"],"x-skills-preferred":["Experience working in a Trust and Safety for a social media company","Collaborating with child safety organizations","Red-teaming and adversarial testing of Large Language Models"],"datePosted":"2026-04-18T15:25:26.718Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Improving Large Language Models (LLMs), Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), Online safety and reducing harm, Ethical reasoning and risk assessment, Data analysis, Experience working in a Trust and Safety for a social media company, Collaborating with child safety organizations, Red-teaming and adversarial testing of Large Language Models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f818897-404"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. 
As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Process appeals, audit automations, and properly label use cases in the system.</li>\n<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support and ability to propose solutions to increase security and safety of our platform.</li>\n<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n<li>Ability to interpret and apply xAI safety policies effectively.</li>\n<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>\n<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>\n<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>\n<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>\n<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience working in a Trust and Safety for a social media company, leveraging AI or other automation tools.</li>\n<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>\n<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>\n</ul>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f818897-404","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097907007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Large 
Language Models (LLMs)","Child Sexual Abuse Material (CSAM)","Child Sexual Exploitation (CSE)","Online safety","Risk assessment","Ethical reasoning","Data analysis","Automation tools","Social media","Generative AI"],"x-skills-preferred":["Red-teaming","Adversarial testing","Trust and Safety","Child safety organizations","Specialized detection tools","Classifier development"],"datePosted":"2026-04-18T15:25:17.446Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f1981394-2ef"},"title":"Senior Analyst, Safety Operations","description":"<p>About xAI</p>\n<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</p>\n<p><strong>RESPONSIBILITIES:</strong></p>\n<ul>\n<li>Process appeals, audit automations, and label use cases in the system.</li>\n</ul>\n<ul>\n<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n</ul>\n<ul>\n<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n</ul>\n<ul>\n<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behaviour, as well as align Grok with our rules enforcement.</li>\n</ul>\n<p><strong>BASIC QUALIFICATIONS:</strong></p>\n<ul>\n<li>Expertise in improving Large Language Models (LLMs) to maximise efficiencies in enforcement and support and ability to propose solutions to increase security and safety of our platform.</li>\n</ul>\n<ul>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n</ul>\n<ul>\n<li>Ability to interpret and apply xAI safety policies effectively.</li>\n</ul>\n<ul>\n<li>Proficiency in analysing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>\n</ul>\n<ul>\n<li>Strong ability to utilise resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>\n</ul>\n<ul>\n<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>\n</ul>\n<ul>\n<li>Commitment to continuous improvement of processes to prioritise safety and risk mitigation.</li>\n</ul>\n<ul>\n<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>PREFERRED SKILLS AND EXPERIENCE:</strong></p>\n<ul>\n<li>Experience working in a Trust and Safety for a social media company, leveraging AI or other automation tools.</li>\n</ul>\n<ul>\n<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to 
translate findings into concrete improvements for enforcement systems and platform robustness.</li>\n</ul>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>\n<p><strong>COMPENSATION AND BENEFITS:</strong></p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f1981394-2ef","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5093554007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Large Language Models (LLMs)","online safety","risk assessment","ethical reasoning","data analysis","enforcement effectiveness","platform safety"],"x-skills-preferred":["red-teaming","adversarial testing","Trust and Safety","AI or other automation tools"],"datePosted":"2026-04-18T15:25:07.932Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Large Language Models (LLMs), online safety, risk assessment, ethical reasoning, data analysis, enforcement effectiveness, platform safety, red-teaming, adversarial testing, Trust and Safety, AI or other automation tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1811a69-c2f"},"title":"Manager, Safety Operations","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Manager, Safety Operations to oversee the processing of appeals and ensure proper labeling of use cases in the system.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Guide the team&#39;s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n<li>Ensure the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n<li>Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.</li>\n<li>Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proven leadership and people management experience in AI-driven operations, with a track record of developing high-performing teams.</li>\n<li>Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support and ability to propose and implement solutions to increase security and safety of our platform.</li>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n<li>Ability 
to interpret, apply, and train teams on xAI safety policies effectively.</li>\n<li>Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.</li>\n<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.</li>\n<li>Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.</li>\n<li>Quality assurance: Ability to hold the team to our high standard for quality work; managing performance as needed.</li>\n<li>Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.</li>\n<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>\n<li>Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1811a69-c2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090695007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Leadership and people management experience in AI-driven operations","Expertise in improving Large Language Models (LLMs)","Proven experience in online safety and reducing harm","Ability to interpret, apply, and train teams on xAI safety policies","Proficiency in analyzing complex scenarios and operational metrics","Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills","Quality assurance: Ability to hold the team to our high standard for quality work","Commitment to continuous improvement of processes, people, and operations","Expertise in data analysis to identify emerging abuse vectors"],"x-skills-preferred":["Experience managing teams in Trust and Safety for a social media company","Expertise in leading red-teaming and adversarial testing of Large Language Models"],"datePosted":"2026-04-18T15:23:50.832Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Leadership and people management experience in AI-driven operations, Expertise in improving Large Language Models (LLMs), Proven experience in online safety and reducing harm, Ability to interpret, apply, and train teams on xAI safety policies, Proficiency in analyzing complex scenarios and operational metrics, Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills, Quality assurance: Ability to hold the team to our high standard 
for quality work, Commitment to continuous improvement of processes, people, and operations, Expertise in data analysis to identify emerging abuse vectors, Experience managing teams in Trust and Safety for a social media company, Expertise in leading red-teaming and adversarial testing of Large Language Models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_febb2521-ab0"},"title":"Global Safety Response Operations Analyst","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n
<p><strong>About the Team</strong></p>\n<p>At OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for experienced Trust, Safety, and Risk Operations analysts who have subject matter expertise in one or more of the following areas: policy enforcement and content moderation, fraud and scam prevention, developer risk, or privacy and regulatory escalations. You’ll be on the front lines of safety escalation management, helping to triage and resolve urgent and sensitive cases. You’ll work across subject matter areas, systems, and processes to ensure operational excellence, develop process improvements and automations, and surface insights and trends.</p>\n<p><strong>In This Role, You Will:</strong></p>\n<ul>\n<li>Handle and resolve high-priority cases across all harm and risk areas, ensuring timely and appropriate resolution in line with policy and legal requirements.</li>\n<li>Operate across multiple systems and tools to manage user reports and tickets, internal escalations, and other high-priority investigations.</li>\n<li>Act as incident manager for escalations requiring nuanced policy, legal, or regulatory interpretation.</li>\n<li>Identify and implement process improvements and automation opportunities to increase efficiency, accuracy, and coverage.</li>\n<li>Conduct quality reviews and provide feedback to improve consistency across global teams.</li>\n<li>Analyze trends and generate insights from escalation and case data to inform policy, product, model behavior, or detection improvements.</li>\n<li>Maintain exceptional accuracy, judgment, and composure under pressure when handling sensitive or time-critical situations.</li>\n<li>Participate in a 24/7 on-call rotation, including off-hours and weekend coverage as needed.</li>\n</ul>\n
<p><strong>You Might Thrive in This Role If You:</strong></p>\n<ul>\n<li>Have 5+ years of experience in trust &amp; safety, content moderation, investigations, fraud, or developer risk operations.</li>\n<li>Have experience working in incident response, law enforcement response, or escalations management.</li>\n<li>Leverage OpenAI technology to enhance workflows, improve decision-making, and scale operational impact.</li>\n<li>Bring deep domain expertise in your specialization area and familiarity with relevant legal, policy, and technical frameworks.</li>\n<li>Have a track record of scaling operations, building processes, and working cross-functionally to improve performance and safety outcomes.</li>\n<li>Possess exceptional analytical skills: able to detect patterns, assess risk, and recommend policy or product changes based on evidence.</li>\n<li>Communicate with clarity, empathy, and precision, especially in sensitive user-facing contexts.</li>\n<li>Thrive in ambiguous, high-autonomy environments and balance speed with diligence.</li>\n<li>Are comfortable with frequent context switching, managing multiple projects, and prioritizing impact.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n
<p><strong>Required Skills</strong></p>\n<ul>\n<li>Trust and Safety</li>\n<li>Content Moderation</li>\n<li>Investigations</li>\n<li>Fraud and Scam Prevention</li>\n<li>Developer Risk</li>\n<li>Privacy and Regulatory Escalations</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>Incident Response</li>\n<li>Law Enforcement Response</li>\n<li>Escalations Management</li>\n<li>OpenAI Technology</li>\n<li>Deep Domain Expertise</li>\n<li>Analytical Skills</li>\n<li>Communication Skills</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_febb2521-ab0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9e77e103-6b70-4b45-a344-d87c4a2d7e12","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$189K – $280K","x-skills-required":["Trust and Safety","Content Moderation","Investigations","Fraud and Scam Prevention","Developer Risk","Privacy and Regulatory Escalations"],"x-skills-preferred":["Incident Response","Law Enforcement Response","Escalations Management","OpenAI Technology","Deep Domain Expertise","Analytical Skills","Communication Skills"],"datePosted":"2026-03-08T22:16:27.787Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust and Safety, Content Moderation, Investigations, Fraud and Scam Prevention, Developer Risk, Privacy and Regulatory Escalations, Incident Response, Law Enforcement Response, Escalations Management, OpenAI Technology, Deep Domain Expertise, Analytical Skills, Communication Skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b6169e99-a3e"},"title":"Safeguards Analyst, Account Abuse","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and society as a whole. 
As a Safeguards Analyst focusing on Account Abuse, you will play a critical role in building and scaling the detection, enforcement, and operational capabilities that protect our platform against scaled abuse.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Develop and iterate on account signals and prevention frameworks that consolidate internal and external data into actionable abuse indicators</li>\n<li>Develop and optimize identity and account-linking signals using graph-based data infrastructure to detect coordinated and scaled account abuse</li>\n<li>Evaluate, integrate, and operationalize third-party vendor signals — assessing whether new data sources provide genuine lift in detection</li>\n<li>Expand internal account signals with new data sources and behavioural indicators to improve detection coverage</li>\n<li>Build and maintain processes that evaluate new product launches for scaled abuse risks, working closely with product teams to ensure enforcement readiness</li>\n<li>Operationalize and iterate on enforcement tooling — including appeals workflows, review processes, and user communications — to maintain quality and scale with growing volume</li>\n<li>Analyze enforcement performance through operational metrics, partnering with the team to keep detection accurate as abuse patterns evolve</li>\n<li>Manage payment fraud and dispute operations to protect revenue and maintain our standing with payment partners</li>\n<li>Coordinate enforcement efforts for policy compliance gaps across products, working with relevant teams to build scalable review processes</li>\n<li>Collaborate with cross-functional teams (Engineering, Product, Legal, Data Science) to surface new signals and translate detection capabilities into enforcement workflows</li>\n<li>Maintain detailed documentation of signal development, enforcement processes, and operational decisions</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement</li>\n<li>Hands-on experience building detection systems, risk models, or enforcement processes and workflows</li>\n<li>Experience evaluating and integrating third-party data sources into detection or scoring pipelines</li>\n<li>Strong SQL and Python skills — this role involves heavy data analysis across complex, multi-table data relationships</li>\n<li>Familiarity with identity signals such as device fingerprinting, account linking, or entity resolution, or experience with appeals processes and customer-facing enforcement communications</li>\n<li>Demonstrated ability to analyze complex data problems and translate findings into actionable improvements</li>\n<li>Strong written and verbal communication skills — ability to explain technical tradeoffs and navigate cross-functional stakeholder conversations</li>\n<li>Equivalent practical experience or a Bachelor&#39;s degree in Computer Science, Data Science, or related field</li>\n</ul>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have built risk scores, detection systems, signal pipelines, or enforcement processes in a previous role — identity verification, trust and safety, or similar</li>\n<li>Are comfortable working with ambiguous, noisy data and extracting meaningful signal</li>\n<li>Think critically about signal quality and enforcement performance — evaluating whether new detection signals or processes meaningfully improve outcomes</li>\n<li>Have experience with graph-based data, account-linking 
problems, or cross-functional process design</li>\n<li>Are proactive about identifying gaps in existing detection or enforcement and proposing new approaches</li>\n<li>Have experience leveraging generative AI tools to support analytical, detection, or enforcement workflows</li>\n<li>Can balance deep analytical work with cross-functional collaboration and stakeholder coordination</li>\n<li>Have a background or interest in cybersecurity or threat intelligence (a plus, not a requirement)</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b6169e99-a3e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108841008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230,000 - $310,000USD","x-skills-required":["risk scoring","fraud detection","trust and safety","policy enforcement","SQL","Python","graph-based data infrastructure","identity signals","device fingerprinting","account linking","entity resolution","appeals processes","customer-facing enforcement communications"],"x-skills-preferred":["generative AI tools","cross-functional process design","cybersecurity","threat intelligence"],"datePosted":"2026-03-08T14:00:53.781Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"risk scoring, fraud detection, trust and safety, policy enforcement, SQL, Python, graph-based data infrastructure, identity signals, device fingerprinting, account linking, entity resolution, appeals processes, customer-facing enforcement communications, generative AI tools, cross-functional process design, cybersecurity, threat intelligence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7df914df-096"},"title":"Software Engineer, Safeguards","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are seeking a Software Engineer, Safeguards to join our team. 
As a Software Engineer, Safeguards, you will be responsible for developing monitoring systems to detect unwanted behaviours from our API partners and potentially taking automated enforcement actions. You will also be responsible for building abuse detection mechanisms and infrastructure, as well as surfacing abuse patterns to our research teams to harden models at the training stage.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Develop monitoring systems to detect unwanted behaviours from our API partners and potentially take automated enforcement actions; surface these in internal dashboards to analysts for manual review</li>\n<li>Build abuse detection mechanisms and infrastructure</li>\n<li>Surface abuse patterns to our research teams to harden models at the training stage</li>\n<li>Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>5-10+ years of experience in a software engineering position, preferably with a focus on integrity, spam, fraud, or abuse detection and mitigation</li>\n<li>Proficiency in Python and Typescript</li>\n<li>Ability to work across the stack</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience building trust and safety detection mechanisms and intervention for AI/ML systems</li>\n<li>Have experience with prompt engineering, jailbreak attacks, and other adversarial inputs</li>\n<li>Have worked closely with operational teams to build custom internal tooling</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Interested in building your career at Anthropic? 
Get future opportunities sent straight to your email.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7df914df-096","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4951844008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $425,000 USD","x-skills-required":["Python","Typescript","Software Engineering","Abuse Detection","Machine Learning"],"x-skills-preferred":["Prompt Engineering","Jailbreak Attacks","Adversarial Inputs","Trust and Safety Detection"],"datePosted":"2026-03-08T13:51:12.817Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Typescript, Software Engineering, Abuse Detection, Machine Learning, Prompt Engineering, Jailbreak Attacks, Adversarial Inputs, Trust and Safety Detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_813dd0ec-e42"},"title":"Software Engineer, Safeguards Infrastructure","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for software engineers to help build the foundational pieces for safety, oversight and intervention mechanisms of our AI systems. As a software engineer on the Safeguards team, you will work to monitor models, prevent misuse, and ensure user well-being. This role will focus on building systems to detect unwanted model behaviors and prevent disallowed use of models. 
You will apply your technical skills to uphold our principles of safety, transparency, and oversight.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Develop the foundational systems which power Safeguards, including infrastructure for data storage and management, metric and evaluation systems, and tooling for human and agentic review.</li>\n<li>Ensure the day-to-day running of Safeguards systems and hold a high operational bar which serves both safety and customers while reducing the amount of human intervention and oversight required.</li>\n<li>Build robust and reliable multi-layered defenses for real-time improvement of safety mechanisms that work at scale</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Software Engineering or comparable experience</li>\n<li>4-10+ years of experience in a software engineering position</li>\n<li>Proficiency in Python</li>\n<li>Ability to work across the stack</li>\n<li>Strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience building trust and safety, anti-spam, fraud or abuse detection and mitigation mechanisms and interventions for AI/ML systems</li>\n<li>Have experience building metrics and measurement systems or data and privacy management systems</li>\n<li>Have worked closely with operational teams to build custom internal tooling</li>\n<li>Be proficient in TypeScript or Rust</li>\n<li>Have experience with Claude Code or similar agentic coding tools</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. <strong>Guidance on Candidates&#39; AI Usage:</strong> Learn about our policy for using AI in our application process</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_813dd0ec-e42","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5074908008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£255,000 - £325,000GBP","x-skills-required":["Python","Software Engineering","Computer Science","Data Storage and Management","Metric and Evaluation Systems","Tooling for Human and Agentic Review"],"x-skills-preferred":["TypeScript","Rust","Claude Code","Agentic Coding Tools","Trust and Safety","Anti-Spam","Fraud or Abuse Detection and Mitigation Mechanisms"],"datePosted":"2026-03-08T13:47:56.482Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Software Engineering, Computer Science, Data Storage and Management, Metric and Evaluation Systems, Tooling for Human and Agentic Review, TypeScript, Rust, Claude Code, Agentic Coding Tools, Trust and Safety, Anti-Spam, Fraud or Abuse Detection and Mitigation Mechanisms","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":255000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_efe846b7-441"},"title":"Enforcement Operations Lead","description":"<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. 
We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p>About the Role</p>\n<p>Anthropic&#39;s Safeguards team is responsible for enforcing our policies, protecting users, and ensuring our platform is not misused. As a Safeguards Enforcement Analyst focused on Safety Evaluations, you&#39;ll play a central role in ensuring our models meet safety and policy standards before and after launch. You&#39;ll run and monitor evaluations, drive mitigations when issues surface, coordinate the creation of new evals, and help build the processes and documentation that allow the team to scale this work over time.</p>\n<p>This role requires someone who is detail-oriented, comfortable navigating ambiguity, and capable of coordinating across teams to break new ground and drive work to completion. This work is deeply cross-functional — you&#39;ll partner closely with policy experts, Safeguards engineering teams, and many other stakeholders throughout the organization to ensure our evaluations are comprehensive and current, and that findings translate into meaningful improvements to model behavior.</p>\n<p>Responsibilities</p>\n<p><em>Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</em></p>\n<p><strong>Vendor Operations</strong></p>\n<ul>\n<li>Own end-to-end management of content moderation vendor relationships, including onboarding, performance management, quality assurance, and capacity planning</li>\n<li>Partner with internal stakeholders to define vendor scope, set SLAs, and evaluate vendor output quality on an ongoing basis</li>\n<li>Identify opportunities to scale content review operations efficiently as Anthropic&#39;s product surface area grows</li>\n<li>Develop and maintain standard operating procedures (SOPs) for all vendor-executed review workflows, ensuring consistency and accuracy across content</li>\n</ul>\n<p><strong>Regulatory Reporting and Enforcement</strong></p>\n<ul>\n<li>Partner with Regulatory Operations to ensure that new product features and content surfaces are incorporated into Safeguards reporting workflows as they launch</li>\n<li>Own enforcement reporting for Regulatory Operations requirements, including maintaining and updating dashboards and tracking mechanisms that provide accurate, timely data to regulatory bodies</li>\n<li>Produce on-request read-outs of enforcement metrics over specified time ranges to support regulatory reporting obligations</li>\n<li>Identify and drive improvements to existing reporting infrastructure — including transitioning manual, spreadsheet-based workflows to more robust and scalable solutions</li>\n<li>Oversee the user-reported content review pipeline, including reviews submitted via the Content Reporting Form across all supported content surfaces</li>\n<li>Ensure SOPs for content review workflows are kept current as new features and surfaces are added</li>\n<li>Work collaboratively with the RegOps team to ensure intake processes are prepared to handle emerging report types (e.g., third-party MCP server reports)</li>\n<li>Maintain a strong understanding of Anthropic&#39;s policy framework to provide informed operational guidance and escalation support</li>\n</ul>\n<p><strong>Copyright 
Operations</strong></p>\n<ul>\n<li>Oversee Safeguards copyright systems, ensuring the right operational processes are in place to handle copyright-related enforcement at scale</li>\n<li>Partner closely with the Regulatory Operations team to scale copyright operations as Anthropic&#39;s products grow, with a particular focus on reducing false positives and improving the accuracy of copyright enforcement workflows</li>\n<li>Identify gaps in current copyright operational processes and drive cross-functional solutions in collaboration with policy, legal, and engineering stakeholders</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 5+ years of experience in trust and safety operations, content moderation program management, or a related field</li>\n<li>Have managed external vendor or contractor relationships, including performance management and quality assurance</li>\n<li>Are comfortable working across policy, legal, and operations teams to translate compliance requirements into practical workflows</li>\n<li>Have experience building or improving operational reporting, dashboards, or enforcement tracking systems</li>\n<li>Are highly organized, with a track record of maintaining rigorous documentation and SOPs in fast-moving environments</li>\n<li>Communicate clearly and precisely — both in writing and verbally — across technical and non-technical audiences</li>\n<li>Are energized by the challenge of building scalable systems in an environment where not everything is already figured out</li>\n<li>Care deeply about the responsible deployment of AI and the role enforcement operations plays in that mission</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience working with regulatory reporting requirements, particularly in the context of online platforms or AI systems</li>\n<li>Familiarity with content moderation tooling and review workflows at scale</li>\n<li>Experience with copyright enforcement operations, including false positive mitigation strategies</li>\n<li>Background in policy enforcement, legal operations, or compliance program management</li>\n<li>Experience supporting or standing up a new operational function, including writing foundational SOPs and building institutional knowledge from scratch</li>\n<li>Comfort working with data and metrics to inform operational decisions and surface trends to leadership</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_efe846b7-441","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5137185008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"The annual compensation range for this role is listed below.","x-skills-required":["trust and safety operations","content moderation program management","vendor relationship management","regulatory reporting","copyright enforcement","policy enforcement","legal operations","compliance program management"],"x-skills-preferred":["content moderation tooling","review workflows at scale","regulatory reporting requirements","AI 
systems"],"datePosted":"2026-03-08T13:42:56.321Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"trust and safety operations, content moderation program management, vendor relationship management, regulatory reporting, copyright enforcement, policy enforcement, legal operations, compliance program management, content moderation tooling, review workflows at scale, regulatory reporting requirements, AI systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93c50f21-80e"},"title":"Strategic Risk Analyst","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Strategic Risk Analyst</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$198K – $320K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. 
We are dedicated to identifying emerging abuse trends, analysing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p>We are building a horizontal “radar” for AI abuse and strategic risk—correlating internal signals, external intelligence, and real-world events into clear, actionable priorities for OpenAI’s safety and product decision-makers.</p>\n<p><strong>About the role</strong></p>\n<p>As a Strategic Risk Analyst, you will help develop and maintain our central view of strategic risk across OpenAI’s products and platforms. You will synthesise internal abuse patterns, upstream and external intelligence, and product and conversational signals into decision-ready risk insights, recurring briefs, and practical prioritisation inputs</p>\n<p>You will partner closely with investigators, engineers, and policy and trust and safety counterparts, as well as measurement and forecasting teammates, to translate messy signals into structured judgments (including assumptions and confidence), ranked priorities, and actionable recommendations. This is an opportunity to do high-leverage analysis in a fast-moving environment, where crisp thinking and communication directly shape safety decisions, mitigations, and product readiness.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Monitor and analyse internal risk signals (abuse telemetry, investigations outputs, model and product signals) to identify trends, shifts in tactics, and new abuse patterns.</li>\n</ul>\n<ul>\n<li>Conduct upstream and external scanning (OSINT, ecosystem developments, real-world events) and distil implications for OpenAI’s products and threat landscape.</li>\n</ul>\n<ul>\n<li>Identify and deep dive into harms and misuse across products and channels, turning messy signals into clear analytic findings.</li>\n</ul>\n<ul>\n<li>Connect individual incidents into system-level narratives about actors, incentives, product design weaknesses, and cross-product spillover—pressure-testing hypotheses early.</li>\n</ul>\n<ul>\n<li>Produce concise, decision-ready risk briefs and intelligence estimates with explicit assumptions, confidence levels, and what would change the assessment.</li>\n</ul>\n<ul>\n<li>Convert analysis into clear, ranked priorities and actionable recommendations that product, safety, and policy teams can execute on.</li>\n</ul>\n<ul>\n<li>Define and track key risk indicators and outcome metrics to evaluate whether mitigations are working and drive course corrections when needed.</li>\n</ul>\n<ul>\n<li>Build early-warning and monitoring capabilities with data, engineering, and visualisation partners, including dashboards that highlight leading indicators and unusual changes.</li>\n</ul>\n<ul>\n<li>Contribute to product readiness and launch reviews; develop reusable playbooks, FAQs, and briefing materials that help teams respond consistently.</li>\n</ul>\n<ul>\n<li>Drive cross-functional alignment by tailoring readouts to investigations, engineering, policy, trust and safety, and product stakeholders—and ensuring decisions and follow-ups are crisp.</li>\n</ul>\n<p><strong>You might thrive in this role if you</strong></p>\n<ul>\n<li>Significant experience (typically <strong>5+ years</strong>) in trust and safety, integrity, security, policy analysis, or intelligence work.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyse complex 
online harms and AI-enabled misuse (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritised recommendations.</li>\n</ul>\n<ul>\n<li>Strong analytical craft: you can identify weak signals, form hypotheses, test them quickly, state assumptions explicitly, and communicate confidence and uncertainty clearly.</li>\n</ul>\n<ul>\n<li>Comfort working across qualitative and quantitative inputs, including (1) casework,</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93c50f21-80e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/d821a725-671f-4327-b918-9be90ef7be45","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$198K – $320K • Offers Equity","x-skills-required":["trust and safety","integrity","security","policy analysis","intelligence work","online harms","AI-enabled misuse","harassment","coordinated abuse","scams","synthetic media","influence operations","brand safety issues"],"x-skills-preferred":["data analysis","data visualisation","machine learning","natural language processing","software development"],"datePosted":"2026-03-06T18:42:41.351Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, integrity, security, policy analysis, intelligence work, online harms, AI-enabled misuse, harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues, data analysis, data visualisation, machine learning, natural language processing, software development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a083d419-42f"},"title":"Manager, Protection Scientist Engineer, Intelligence and Investigations","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Manager, Protection Scientist Engineer, Intelligence and Investigations</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$288K – $425K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real world deployment and iteratively updating based on what we learn.</p>\n<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>\n<p><strong>About the Role</strong></p>\n<p>Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a manager of Protection Scientist Engineers, you will lead a small but growing team of PSEs who design and build systems to proactively identify and enforce on abuse on OpenAI’s products. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest risk harms. The team also responds to and investigates critical escalations, especially those that are not caught by our existing safety systems. The team, and you, will leverage expert understanding of our products and data, and work cross-functionally with product, policy, and scaled engineering teams.</p>\n<p>This role is based in our San Francisco office and may involve resolving urgent escalations outside of normal work hours. 
Some investigations and work may involve sensitive content, including sexual, child safety, violent, or otherwise-disturbing material.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Lead, manage, and support an interdisciplinary and technical team across US timezones, working with them to chart longer term strategies.</li>\n</ul>\n<ul>\n<li>Leverage your expertise in designing, launching, and improving systems of defense to mentor the team and develop both technical and organisational solutions.</li>\n</ul>\n<ul>\n<li>Identify and create opportunities to enhance and make more efficient the team’s core work through understanding both cross-functional and technical challenges.</li>\n</ul>\n<ul>\n<li>Review, and at times design and participate in, the implementation of abuse detection, review, and enforcement for new product launches and major harms.</li>\n</ul>\n<ul>\n<li>Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.</li>\n</ul>\n<ul>\n<li>Communicate the work of the team, at times externally, and coordinate with cross-functional partners to scope and prioritize work across abuse response.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have at least two years of experience managing or tech leading teams that fight threats at a tech company and/or government organization.</li>\n</ul>\n<ul>\n<li>Have deep experience with technical analysis and harm detection, especially using SQL and Python.</li>\n</ul>\n<ul>\n<li>Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.</li>\n</ul>\n<ul>\n<li>Have experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution. Basic software development skills are a plus as this role writes productionised code.</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a083d419-42f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/336f3864-ccbf-44eb-bfc9-8090432a8e6f","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$288K – $425K • Offers Equity","x-skills-required":["SQL","Python","data science","machine learning","investigation","policy/protocol development","team management","technical analysis","harm detection","trust and safety","policy enforcement","engineering","data engineering","software development"],"x-skills-preferred":["language models","scaled tooling","cross-functional collaboration","prioritization","communication"],"datePosted":"2026-03-06T18:40:00.329Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, data science, machine learning, investigation, policy/protocol development, team management, technical analysis, harm detection, trust and safety, policy enforcement, engineering, data engineering, software development, language models, scaled tooling, cross-functional collaboration, prioritization, communication","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":288000,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a3da0b4-614"},"title":"Data Visualization Analyst","description":"<p><strong>Data Visualization Analyst</strong></p>\n<p><strong>About the team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analysing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for a data visualization analyst who can turn complex intelligence and forecast outputs into clear, interactive visual stories that shape how OpenAI manages risk. 
In this role, you build the dashboards, maps, and visual frameworks that leadership and partners rely on to understand where risks are emerging, how they evolve over time, and which mitigations matter most.</p>\n<p><strong>In this role, you will</strong></p>\n<p>Design and build dashboards and interactive visualizations that track:</p>\n<ul>\n<li>Emerging safety and abuse risks across products and geographies.</li>\n</ul>\n<ul>\n<li>Geopolitical developments that affect OpenAI&#39;s risk posture.</li>\n</ul>\n<ul>\n<li>Key abuse cases and incident trends using both structured and unstructured data.</li>\n</ul>\n<p>Develop interactive risk views such as:</p>\n<ul>\n<li>Prioritization heat maps for safety and abuse scenarios.</li>\n</ul>\n<ul>\n<li>&#39;Emerging risks&#39; boards with drill-downs for deep dives.</li>\n</ul>\n<ul>\n<li>Forecast views of risk trajectories, including scenario comparisons over time.</li>\n</ul>\n<p>Partner with analysts and investigators to:</p>\n<ul>\n<li>Translate narrative intelligence, case studies, and OSINT into visual formats.</li>\n</ul>\n<ul>\n<li>Standardize how we visualize incidents, metrics, campaigns, and risk clusters.</li>\n</ul>\n<ul>\n<li>Iterate quickly on visual prototypes in response to ongoing investigations or emerging crises.</li>\n</ul>\n<p>Collaborate with data science, Safety Systems, Integrity, Global Affairs, and Product teams to:</p>\n<ul>\n<li>Ingest and align data from multiple internal systems and external sources.</li>\n</ul>\n<ul>\n<li>Ensure visualizations are accurate, timely, and consistent with existing metrics and taxonomies.</li>\n</ul>\n<ul>\n<li>Build views that are directly usable in leadership reviews, safety councils, and partner briefings.</li>\n</ul>\n<p>Establish and maintain a visual &#39;source of truth&#39; for:</p>\n<ul>\n<li>Top strategic risk themes and their evolution over time.</li>\n</ul>\n<ul>\n<li>Cross-geography views (e.g., markets, supply chains, infrastructure) for safety and geopolitical risk.</li>\n</ul>\n<ul>\n<li>Key mitigations and owners linked to specific risk categories or regions.</li>\n</ul>\n<ul>\n<li>Create templates, style guides, and reusable components so that the team can rapidly produce consistent, high-quality visual materials (dashboards, one-pagers, workshop materials, and exec reviews).</li>\n</ul>\n<p><strong>You might thrive in this role if you</strong></p>\n<ul>\n<li>Significant experience (typically 4+ years) in data visualization, business intelligence, or analytics, ideally in security, trust and safety, intelligence, risk, or policy environments.</li>\n</ul>\n<ul>\n<li>Demonstrated expertise with at least one major data visualization / BI stack (e.g., Tableau, Looker, Power BI, Mode, Superset, or equivalent), including building interactive visualizations and dashboards.</li>\n</ul>\n<ul>\n<li>Strong visual design skills and the ability to communicate complex information in a clear and concise manner.</li>\n</ul>\n<ul>\n<li>Excellent collaboration and communication skills, with the ability to work effectively with cross-functional teams.</li>\n</ul>\n<ul>\n<li>Strong problem-solving skills and the ability to adapt to changing priorities and deadlines.</li>\n</ul>\n<ul>\n<li>Experience with data storytelling and the ability to turn complex data into compelling narratives.</li>\n</ul>\n<ul>\n<li>Familiarity with OpenAI&#39;s products and services, or a strong desire to learn about them.</li>\n</ul>\n<ul>\n<li>A passion for using data to drive decision-making and improve 
outcomes.</li>\n</ul>\n<ul>\n<li>A commitment to delivering high-quality work and meeting deadlines.</li>\n</ul>\n<ul>\n<li>A willingness to learn and grow with the team and the company.</li>\n</ul>\n<p><strong>What we offer</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n</ul>\n<ul>\n<li>Opportunity to work with a talented and diverse team of professionals.</li>\n</ul>\n<ul>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<ul>\n<li>Professional development opportunities and training.</li>\n</ul>\n<ul>\n<li>Flexible work arrangements and remote work options.</li>\n</ul>\n<ul>\n<li>Access to cutting-edge technology and tools.</li>\n</ul>\n<ul>\n<li>Recognition and rewards for outstanding performance.</li>\n</ul>\n<ul>\n<li>A fun and inclusive company culture.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you are a motivated and talented data visualization analyst looking for a new challenge, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4a3da0b4-614","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/ae2b93cc-8f6b-4995-9f0e-70e0d1c4d7d1","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$198K – $320K","x-skills-required":["data visualization","business intelligence","analytics","security","trust and safety","intelligence","risk","policy","Tableau","Looker","Power BI","Mode","Superset"],"x-skills-preferred":["data storytelling","visual design","collaboration","communication","problem-solving","adaptability","OpenAI products and services"],"datePosted":"2026-03-06T18:38:16.993Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data visualization, business intelligence, analytics, security, trust and safety, intelligence, risk, policy, Tableau, Looker, Power BI, Mode, Superset, data storytelling, visual design, collaboration, communication, problem-solving, adaptability, OpenAI products and services","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":320000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7b5747bd-067"},"title":"Regulatory Operations Analyst","description":"<p><strong><strong>About the Team</strong></strong></p>\n<p>At OpenAI, our User Operations team safeguards our products and users from legal risk, regulatory non-compliance, fraud, and abuse. The team operates at the intersection of operations, compliance, and user trust, embedded within the broader User Operations organization and collaborating cross-functionally with Legal, Policy, Engineering, Product, and external vendors.</p>\n<p>We support a global and diverse user base across OpenAI’s suite of products ChatGPT, SORA, API, Enterprise offerings, and developer tools by managing sensitive inbound tickets, regulatory obligations, and fraud-related escalations. 
Every user-facing action is grounded in our commitment to legal integrity, ethical practice, and regulatory excellence.</p>\n<p><strong><strong>About the Role</strong></strong></p>\n<p>We are seeking a sharp, adaptive, and operations-minded Regulatory Operations Analyst to help scale and evolve OpenAI’s global compliance support infrastructure.</p>\n<p>In this role, you will own high-sensitivity workflows involving complex global regulatory escalations, trust and safety matters, privacy rights requests, and intellectual property matters. You’ll serve as both a frontline operator and strategic partner, triaging and resolving complex cases while also shaping the systems, documentation, and processes that support OpenAI’s regulatory and compliance goals. You will contribute as a subject-matter expert (SME) on high-stakes escalations, partnering with cross-functional stakeholders to drive fast, defensible outcomes. You will also help design the processes, tooling, and automation that power safe operations at scale.</p>\n<p>This role is essential to building scalable, high-integrity operations that protect user rights, meet our obligations under emerging and current regulations, and reduce OpenAI’s risk exposure. You’ll also contribute to multi-phase transitions and automation efforts that support our long-term operational model.</p>\n<p>We use a hybrid work model of 3 days in the office per week in our Dublin office.</p>\n<p><strong>Please note:</strong> This role may involve exposure to sensitive or concerning content, including complaints involving harassment, fraud, or regulatory violations. Strong personal discretion, empathy, and resilience are essential.</p>\n<p><strong><strong>In This Role, You Will:</strong></strong></p>\n<ul>\n<li>Handle and resolve complex user issues involving:</li>\n</ul>\n<ul>\n<li>Trust &amp; Safety incidents</li>\n</ul>\n<ul>\n<li>Regulatory, audit, or compliance inquiries and complaints</li>\n</ul>\n<ul>\n<li>Intellectual property matters (e.g., copyright takedowns, ownership disputes)</li>\n</ul>\n<ul>\n<li>AI governance and regulatory frameworks (e.g., EU AI Act, DSA/OSA)</li>\n<li>Perform risk evaluations and investigations using internal tools, documentation, and third-party data</li>\n</ul>\n<ul>\n<li>Act as incident manager for highly sensitive reviews requiring nuanced interpretation of legal and regulatory standards</li>\n</ul>\n<ul>\n<li>Interface directly with Legal, Privacy, Product, and Support teams to coordinate escalations and resolution paths</li>\n</ul>\n<ul>\n<li>Partner with Legal, Privacy, Policy, and Ops to implement world-class operational workflows for compliance and risk</li>\n</ul>\n<ul>\n<li>Build and maintain tooling, escalation decision trees, playbooks, and knowledge articles</li>\n</ul>\n<ul>\n<li>Contribute to vendor training and governance models, especially during transitions and ramp-up phases</li>\n</ul>\n<ul>\n<li>Lead or participate in cross-functional initiatives that strengthen our regulatory, fraud, and legal infrastructure</li>\n</ul>\n<ul>\n<li>Monitor operational health via case quality audits, SLA tracking, escalation accuracy, and data</li>\n</ul>\n<p><strong><strong>You Might Thrive in This Role If You:</strong></strong></p>\n<ul>\n<li>Have 5+ years of experience in legal operations, regulatory compliance, or trust &amp; safety, especially in a global or high-growth tech environment</li>\n</ul>\n<ul>\n<li>Have partnered with in-house counsel, DPOs, or external regulators on audits or 
escalations</li>\n</ul>\n<ul>\n<li>Understand tiered support structures and have worked with vendor operations at scale</li>\n</ul>\n<ul>\n<li>Bring a structured, systems-first mindset to operational governance and risk evaluation</li>\n</ul>\n<ul>\n<li>Communicate clearly, empathetically, and effectively especially in writing responses to sensitive issues</li>\n</ul>\n<ul>\n<li>Operate well in ambiguity and can manage multiple priorities simultaneously with speed and precision</li>\n</ul>\n<p>Thrive in high-autonomy environments and hold a high bar for ownership and integrity</p>\n<p><strong><strong>About OpenAI</strong></strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7b5747bd-067","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/a236b8f1-e5d1-494e-ad76-f6027935fafb","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Regulatory compliance","Trust and safety","Intellectual property","AI governance","Risk evaluation","Incident management","Communication","Operational governance","Vendor training","Cross-functional collaboration"],"x-skills-preferred":["Legal operations","Policy development","Process improvement","Data analysis","Project management"],"datePosted":"2026-03-06T18:37:13.564Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Regulatory compliance, Trust and safety, Intellectual property, AI governance, Risk evaluation, Incident management, Communication, Operational governance, Vendor training, Cross-functional collaboration, Legal operations, Policy development, Process improvement, Data analysis, Project management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b51f695-3f8"},"title":"Trust & Safety Operations Analyst","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Trust &amp; Safety Operations Analyst</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$189K – $280K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong><strong>About the Team</strong></strong></p>\n<p>At OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. 
We operate at the intersection of operations, compliance, user trust, and safety working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.</p>\n<p>We support users across ChatGPT, our API, enterprise offerings, and developer tools handling sensitive inbound cases, building detection and enforcement systems, and scaling operational processes to meet the demands of a fast-moving, high-stakes environment.</p>\n<p><strong><strong>About the Role</strong></strong></p>\n<p>We are seeking experienced, senior-level analysts who specialize in one or more of the following areas:</p>\n<ul>\n<li><strong>Content Integrity &amp; Scaled Enforcement</strong> – Detecting, reviewing, and acting on policy violations, harmful content, and emerging abuse patterns at scale.</li>\n</ul>\n<ul>\n<li><strong>Emerging Risk Operations</strong> – Identifying, triaging, and mitigating new and complex safety, policy, or integrity challenges in a rapidly evolving AI landscape.</li>\n</ul>\n<p>In this role, you will own high-sensitivity workflows, act as an incident manager for complex cases, and build scalable operational systems; including tooling, automation, and vendor processes that reinforce user safety and trust while meeting our legal, ethical, and product obligations.</p>\n<p>We use a hybrid work model of 3 days in the San Francisco office per week and offer relocation assistance to new employees.</p>\n<p>Please note: This role may involve exposure to sensitive content, including material that is sexual, violent, or otherwise disturbing.</p>\n<p><strong>In This Role, You Will:</strong></p>\n<ul>\n<li>Handle and resolve high-priority cases in your area of specialization (scaled content enforcement, fraud/scams, privacy/regulatory, or emerging risks).</li>\n</ul>\n<ul>\n<li>Perform in-depth risk evaluations and investigations using internal tools, product signals, and third-party data.</li>\n</ul>\n<ul>\n<li>Act as incident manager for escalations requiring nuanced policy, legal, or regulatory interpretation.</li>\n</ul>\n<ul>\n<li>Partner with cross-functional teams to design and implement world-class operational workflows, decision trees, and automation strategies.</li>\n</ul>\n<ul>\n<li>Build feedback loops from casework to inform product, engineering, and policy improvements.</li>\n</ul>\n<ul>\n<li>Develop and maintain playbooks, SOPs, macros, and knowledge resources for internal teams and vendors.</li>\n</ul>\n<ul>\n<li>Lead or contribute to cross-functional projects, from zero-to-one process builds to global operational scale-ups.</li>\n</ul>\n<ul>\n<li>Monitor operational health through case quality audits, SLA adherence, escalation accuracy, and user satisfaction metrics.</li>\n</ul>\n<ul>\n<li>Train and support vendor teams, ensuring consistent quality and alignment with OpenAI’s trust and safety standards.</li>\n</ul>\n<p><strong>You Might Thrive in This Role If You:</strong></p>\n<ul>\n<li>Have 5+ years of experience in one or more of: trust &amp; safety, fraud prevention, scam investigation, privacy/legal operations, compliance, or other risk/integrity domains ideally in a global or high-growth tech environment.</li>\n</ul>\n<ul>\n<li>Leverage OpenAI technology to enhance workflows, improve decision-making, and scale operational impact.</li>\n</ul>\n<ul>\n<li>Bring deep domain expertise in your specialization area and familiarity with relevant legal, policy, and technical 
frameworks.</li>\n</ul>\n<ul>\n<li>Have a track record of scaling operations, building processes, and working cross-functionally to improve performance and safety outcomes.</li>\n</ul>\n<ul>\n<li>Possess exceptional analytical skills able to detect patterns, assess risk, and recommend policy or product changes based on evidence.</li>\n</ul>\n<ul>\n<li>Communicate with clarity, empathy, and precision especially in sensitive user-facing contexts.</li>\n</ul>\n<ul>\n<li>Thrive in ambiguous, high-autonomy environments and balance speed with diligence.</li>\n</ul>\n<p>Are comfortable with frequent context switching, managing multiple projects, and prioritizing impact.</p>\n<p><strong><strong>What We Offer</strong></strong></p>\n<ul>\n<li>Competitive salary and equity package</li>\n</ul>\n<ul>\n<li>Comprehensive benefits, including medical, dental, and vision insurance</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave and medical and caregiver leave</li>\n</ul>\n<ul>\n<li>Flexible PTO and paid company holidays</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend</li>\n</ul>\n<ul>\n<li>Daily meals in our offices and meal delivery credits</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends</li>\n</ul>\n<p><strong><strong>How to Apply</strong></strong></p>\n<p>If you are a motivated and experienced professional looking to join a dynamic team, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b51f695-3f8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/eb54b316-26fb-498f-a68c-9990ff9c402c","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$189K – $280K","x-skills-required":["trust & safety","fraud prevention","scam investigation","privacy/legal operations","compliance","risk/integrity domains","OpenAI technology","workflow management","decision-making","operational impact","domain expertise","legal","policy","technical frameworks","analytical skills","pattern detection","risk assessment","policy/product changes","communication","clarity","empathy","precision","user-facing contexts","ambiguous environments","high-autonomy","speed","diligence","context switching","project management","prioritization"],"x-skills-preferred":["ChatGPT","API","enterprise offerings","developer tools","sensitive content","sexual","violent","disturbing","hybrid work model","relocation assistance","vendor management","quality control","alignment","OpenAI’s trust and safety standards"],"datePosted":"2026-03-06T18:34:54.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust & safety, fraud prevention, scam investigation, privacy/legal operations, compliance, risk/integrity domains, OpenAI technology, workflow management, decision-making, 
operational impact, domain expertise, legal, policy, technical frameworks, analytical skills, pattern detection, risk assessment, policy/product changes, communication, clarity, empathy, precision, user-facing contexts, ambiguous environments, high-autonomy, speed, diligence, context switching, project management, prioritization, ChatGPT, API, enterprise offerings, developer tools, sensitive content, sexual, violent, disturbing, hybrid work model, relocation assistance, vendor management, quality control, alignment, OpenAI’s trust and safety standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_14fdcb43-401"},"title":"Data Scientist, Strategic Intelligence & Risk","description":"<p><strong>Data Scientist, Strategic Intelligence &amp; Risk</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. 
Our efforts contribute to OpenAI&#39;s overarching goal of developing AI that benefits humanity.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Data Scientist, you will lead econometric and experimental analysis to understand how risk changes in complex human–AI systems. Your work will focus on measuring the magnitude and impact of risk shifts in a fast-paced, rapidly evolving operational environment. You will design experiments and observational studies to identify causal drivers and analyze changes in risk across a wide range of surfaces and sources. Your analyses will directly inform prioritization and strategic risk management across the company.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own the design and execution of experimental and observational analyses used to assess strategic risk</li>\n</ul>\n<ul>\n<li>Develop econometric approaches to estimate the impact of product, policy, and external developments on key risk vectors</li>\n</ul>\n<ul>\n<li>Translate strategic risk questions into testable hypotheses and sound study designs</li>\n</ul>\n<ul>\n<li>Design and deploy A/B tests, as well as pseudo-experimental studies, to measure changes in risks and understand underlying mechanisms</li>\n</ul>\n<ul>\n<li>Identify, test, and explain product-driven, event-driven, or signal-driven changes in risk</li>\n</ul>\n<ul>\n<li>Establish baselines and statistical confidence around core metrics to size these problems</li>\n</ul>\n<ul>\n<li>Partner across teams to track strategic risks, identify opportunities for intervention, and develop analyses to evaluate those interventions</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 3–6+ years in econometrics, causal inference, or experimental research</li>\n</ul>\n<ul>\n<li>Are comfortable owning ambiguous analyses with large-scale influence</li>\n</ul>\n<ul>\n<li>Are strong in experimental design, observational methods, and statistical reasoning</li>\n</ul>\n<ul>\n<li>Write solid Python and SQL</li>\n</ul>\n<ul>\n<li>Experience delivering zero-to-one analyses and scaling them from concept through deployment</li>\n</ul>\n<ul>\n<li>Communicate data-driven findings clearly, including uncertainty and trade-offs, to non-technical partners and leadership</li>\n</ul>\n<ul>\n<li>Nice to have: experience in trust and safety, integrity, operational security, intelligence analysis or other quantitative risk-focused domains</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_14fdcb43-401","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/6131fc4f-bfc8-49f3-8223-773a55d15583","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $325K • Offers Equity","x-skills-required":["econometrics","causal inference","experimental research","Python","SQL","experimental design","observational methods","statistical reasoning"],"x-skills-preferred":["trust and safety","integrity","operational security","intelligence analysis"],"datePosted":"2026-03-06T18:34:25.237Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"econometrics, causal inference, experimental research, Python, SQL, experimental design, observational methods, statistical reasoning, trust and safety, integrity, operational security, intelligence analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c8b3f26f-c2d"},"title":"Software Engineer, Privacy","description":"<p><strong>Software Engineer, Privacy</strong></p>\n<p>We’re in search of a Software Engineer with experience building data pipelines and working closely with members of the Legal team. This role is perfect for someone who&#39;s passionate about the intersection of systems, privacy, and legal compliance. You will architect, design, and write backend systems responsible for handling some of the most sensitive data at OpenAI.</p>\n<p>This role is based in Dublin, Ireland. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, build, and maintain back-end systems and services that power privacy and data compliance functions within our API products and consumer applications.</li>\n</ul>\n<ul>\n<li>Work closely with legal advisors and other engineers to respond to court orders and other legal processes, all while upholding strict data privacy and legal standards.</li>\n</ul>\n<ul>\n<li>Identify opportunities for automation and build the tools that enable other teams to automate tasks involving customer data.</li>\n</ul>\n<ul>\n<li>Develop and implement data handling policies and procedures in compliance with legal and ethical standards, ensuring the integrity and confidentiality of user data.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience building data pipelines, especially for legal processes and investigative workflows.</li>\n</ul>\n<ul>\n<li>Can translate legal requirements into technical solutions and explain technical solutions to a non-technical audience.</li>\n</ul>\n<ul>\n<li>Take responsibility for problems from beginning to end, and are prepared to acquire any missing knowledge necessary to get the job done.</li>\n</ul>\n<ul>\n<li>Create tools to speed up your own and your colleagues’ workflows, particularly when pre-existing solutions are inadequate.</li>\n</ul>\n<ul>\n<li>Deeply care about user experience and take pride in developing products that meet customer needs while ensuring privacy.</li>\n</ul>\n<ul>\n<li>Have a background in security investigations or experience working in collaboration with trust and safety, legal, and engineering teams.</li>\n</ul>\n<p><strong>Compensation, Benefits and Perks</strong></p>\n<p>This is a position with OpenAI Ireland Ltd., which controls the hiring and management of this position.</p>\n<p>Total compensation includes an annual salary, generous equity, and benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>PRSA plan with 6% employer matching</li>\n</ul>\n<ul>\n<li>Unlimited time off</li>\n</ul>\n<ul>\n<li>Annual learning &amp; development stipend (€1,400 per year)</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c8b3f26f-c2d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/fb5862cc-244c-410b-b287-47df89ad1e43","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Total compensation includes an annual salary, generous equity, and benefits.","x-skills-required":["data pipelines","legal processes","investigative workflows","backend systems","services","API products","consumer applications","court orders","legal processes","data handling policies","procedures","legal and ethical standards","user data","security investigations","trust and safety","legal","engineering teams"],"x-skills-preferred":["data pipelines","legal processes","investigative workflows","backend systems","services","API products","consumer applications","court orders","legal processes","data handling policies","procedures","legal and ethical standards","user data","security investigations","trust and safety","legal","engineering teams"],"datePosted":"2026-03-06T18:30:20.291Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data pipelines, legal processes, investigative workflows, backend systems, services, API products, consumer applications, court orders, legal processes, data handling policies, procedures, legal and ethical standards, user data, security investigations, trust and safety, legal, engineering teams, data pipelines, legal processes, investigative workflows, backend systems, services, API products, consumer applications, court orders, legal processes, data handling policies, procedures, legal and ethical standards, user data, security investigations, trust and safety, legal, engineering teams"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_77403985-6da"},"title":"Data Scientist, Integrity Measurement","description":"<p><strong>Data Scientist, Integrity Measurement</strong></p>\n<p><strong>Location</strong></p>\n<p>London, UK</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Data Science</p>\n<p>The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary, but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is at the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability</p>\n<p>The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. 
As the systems that address some of our most severe usage harms become more mature, we’re adding data scientists to help us robustly measure the prevalence of these problems and the quality of our response to them.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for experienced trust and safety data scientists to help us improve, productionise, and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating prevalence for on-platform (and sometimes off-platform!) harm, and analyses to identify gaps and opportunities in our responses.</p>\n<p>This role is based out of our London office and may involve resolving urgent escalations outside of normal work hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise-disturbing material.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals</li>\n</ul>\n<ul>\n<li>develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, which may necessarily include off-platform indicators or other non-standard datasets</li>\n</ul>\n<ul>\n<li>build metrics that can be used for goaling or A/B tests when prevalence or other top line metrics are not suitable</li>\n</ul>\n<ul>\n<li>own dashboards and metrics reporting for harm verticals</li>\n</ul>\n<ul>\n<li>conduct analyses and generate insights that inform improvements to review, detection, or enforcement, and that influence roadmaps</li>\n</ul>\n<ul>\n<li>optimise LLM prompts for the purpose of measurement</li>\n</ul>\n<ul>\n<li>collaborate with other safety teams to understand key safety concerns and create relevant policies that will support safety needs</li>\n</ul>\n<ul>\n<li>provide metrics for leadership and external reporting</li>\n</ul>\n<ul>\n<li>develop automation to scale yourself, leveraging our agentic products</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>are a senior data scientist with trust and safety experience who can drive measurement direction</li>\n</ul>\n<ul>\n<li>have deep statistics skills, specifically around sampling methods and prevalence estimation of complicated problem areas (ideally activity- rather than content-based)</li>\n</ul>\n<ul>\n<li>have experience working with severe and sensitive harm areas like child safety or violence</li>\n</ul>\n<ul>\n<li>are an excellent communicator, and have strong cross-functional collaboration skills</li>\n</ul>\n<ul>\n<li>are capable in data programming languages (R or Python, SQL)</li>\n</ul>\n<ul>\n<li>(ideally) have experience with AI harms or leveraging AI for measurement</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_77403985-6da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/a7a326e7-6718-42ab-a4d8-b16d5021c99b","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["trust and safety experience","statistics skills","sampling methods","prevalence estimation","data programming languages (R or python, SQL)","AI harms or leveraging AI for measurement"],"x-skills-preferred":["cross-functional collaboration skills","excellent communication skills"],"datePosted":"2026-03-06T18:26:49.156Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety experience, statistics skills, sampling methods, prevalence estimation, data programming languages (R or python, SQL), AI harms or leveraging AI for measurement, cross-functional collaboration skills, excellent communication skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_700b3e35-b5b"},"title":"Software Engineer, Integrity Foundations","description":"<p><strong>Software Engineer, Integrity Foundations - London</strong></p>\n<p><strong>Location</strong></p>\n<p>London, UK</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$221K – $370K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong><strong>About the team</strong></strong></p>\n<p>The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary, but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is at the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability</p>\n<p>The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI’s platforms. 
We are creating a 0→1 team in London to architect next-generation systems that will support, leverage, and scale the work of experts in harms committed with our technology.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and build scaled foundational systems used in the detection, tracking, and enforcement of harm using our technologies.</li>\n</ul>\n<ul>\n<li>Work closely with and learn from technical and non-technical experts on harms we are observing, and that stem from our platforms, to inform your designs and implementation.</li>\n</ul>\n<ul>\n<li>Leverage OpenAI’s most advanced technologies to automate and augment our abilities to detect and reason about complex harms quickly, accurately, and with minimum human intervention.</li>\n</ul>\n<ul>\n<li>Collaborate with policy, trust and safety operations, legal, investigations, and harm-specialised engineers and data scientists to holistically combat abusive actors and customers using OpenAI’s technology.</li>\n</ul>\n<ul>\n<li>Stay abreast of the latest techniques and tools to remain several steps ahead of determined and well-resourced adversaries.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have at least 5 years of software engineering experience in backend and data systems.</li>\n</ul>\n<ul>\n<li>Have at least 2 years of experience in trust and safety analysis, investigation, and/or operations.</li>\n</ul>\n<ul>\n<li>Are excited to learn from top experts in the world on harms committed with AI, and to collaborate in an interdisciplinary team including technical and non-technical roles to combat these harms.</li>\n</ul>\n<ul>\n<li>Can dive into our codebase, intuit how it works, and offer well-grounded suggestions that will lead us to a stronger engineering position.</li>\n</ul>\n<ul>\n<li>Have a voracious and intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing what you learn clearly and concisely with others.</li>\n</ul>\n<ul>\n<li>Are comfortable with ambiguity and rapidly changing conditions. You view changes as an opportunity to add structure and order when necessary.</li>\n</ul>\n<ul>\n<li>Have experience in machine learning techniques (a plus, but not required).</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_700b3e35-b5b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/46703db4-6023-4ac6-93a8-22dc95009945","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$221K – $370K • Offers Equity","x-skills-required":["software engineering","backend and data systems","trust and safety analysis","investigation","operations","Machine Learning techniques"],"x-skills-preferred":["AI research","deployment","data science","engineering"],"datePosted":"2026-03-06T18:24:45.152Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, backend and data systems, trust and safety analysis, investigation, operations, Machine Learning techniques, AI research, deployment, data science, engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":221000,"maxValue":370000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3fa5807c-2f8"},"title":"Data Scientist, Integrity Measurement","description":"<p><strong>Data Scientist, Integrity Measurement</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco; New York City</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Data Science</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary, but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is at the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for experienced trust and safety data scientists to help us improve, productionise and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating prevalence for on-platform (and sometimes off-platform!) 
harm, and analyses to identify gaps and opportunities in our responses.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own measurement and quantitative analysis for a group of severe, actor- and network-based usage harm verticals.</li>\n</ul>\n<ul>\n<li>Develop and implement AI-first methods for prevalence measurement and other productionised safety metrics, which may necessarily include off-platform indicators or other non-standard datasets.</li>\n</ul>\n<ul>\n<li>Build metrics that can be used for goaling or A/B tests when prevalence or other top-line metrics are not suitable.</li>\n</ul>\n<ul>\n<li>Own dashboards and metrics reporting for harm verticals.</li>\n</ul>\n<ul>\n<li>Conduct analyses and generate insights that inform improvements to review, detection, or enforcement, and that influence roadmaps.</li>\n</ul>\n<ul>\n<li>Optimise LLM prompts for the purpose of measurement.</li>\n</ul>\n<ul>\n<li>Collaborate with other safety teams to understand key safety concerns and create relevant policies that will support safety needs.</li>\n</ul>\n<ul>\n<li>Provide metrics for leadership and external reporting.</li>\n</ul>\n<ul>\n<li>Develop automation to scale yourself, leveraging our agentic products.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Are a senior data scientist with trust and safety experience who can drive measurement direction.</li>\n</ul>\n<ul>\n<li>Have deep statistics skills, specifically around sampling methods and prevalence estimation of complicated problem areas (ideally activity- rather than content-based).</li>\n</ul>\n<ul>\n<li>Have experience working with severe and sensitive harm areas like child safety or violence.</li>\n</ul>\n<ul>\n<li>Are an excellent communicator, and have strong cross-functional collaboration skills.</li>\n</ul>\n<ul>\n<li>Are capable in data programming languages (R or Python, SQL).</li>\n</ul>\n<ul>\n<li>(Ideally) have experience with AI harms or leveraging AI for measurement.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3fa5807c-2f8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/be4e1098-f7ac-46f4-babe-44ef08f47fcb","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $385K • Offers Equity","x-skills-required":["trust and safety experience","statistics skills","sampling methods","prevalence estimation","data programming languages (R or python, SQL)","AI harms or leveraging AI for measurement"],"x-skills-preferred":["excellent communicator","strong cross-functional collaboration skills"],"datePosted":"2026-03-06T18:24:05.836Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; New York City"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety experience, statistics skills, sampling methods, prevalence estimation, data programming languages (R or python, SQL), AI harms or leveraging AI for measurement, excellent communicator, strong cross-functional collaboration skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5b317b7-cd0"},"title":"Senior AI Red Team Analyst","description":"<p>As a Senior AI Red Team Analyst at Epic Games, you will be instrumental in protecting our gaming ecosystem by identifying and mitigating trust and safety risks in AI-driven features. 
Your work will ensure that our games remain safe, inclusive, and enjoyable for players by proactively addressing potential abuses of our content rules and our community rules.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Take a leadership role in developing, prototyping, and teaching novel red teaming techniques and trust and safety methodologies to enhance team capabilities</li>\n<li>Investigate and understand how adversarial attacks, such as prompt injections, data poisoning, or bias exploitation, could manifest in Epic’s products</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of experience conducting investigations or red teaming in fields such as cybersecurity, AI ethics, trust and safety, or related areas</li>\n<li>Proven ability to develop multi-source, evidence-based findings and communicate them effectively to technical and non-technical stakeholders</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5b317b7-cd0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5678363004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$170,135—$283,558 USD (New York City Base Pay Range)","x-skills-required":["investigations","red teaming","cybersecurity","AI ethics","trust and safety"],"x-skills-preferred":["data analysis","Python","SQL","AI governance","ethical AI frameworks"],"datePosted":"2026-03-05T21:07:45.831Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Multiple Locations"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"investigations, red teaming, cybersecurity, AI ethics, trust and safety, data analysis, Python, SQL, AI governance, ethical AI frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":170135,"maxValue":283558,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_94a0ba0d-d03"},"title":"Senior Product Manager, Moderation Automation","description":"<p>Epic Games is seeking a passionate and experienced Senior Product Manager for our Machine Learning Solutions team. 
In this role, you will be responsible for owning the end-to-end development and implementation of strategies and products that ensure a safe and secure environment for our players across all Epic Games platforms.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Support specific product visions and strategy for Machine Learning Solutions at Epic Games.</li>\n<li>Take full ownership of specific moderation automation initiatives, driving projects from inception through to delivery and evaluation.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of experience in product management, with a focus on working with engineering (ideally machine learning) teams, and particularly in the trust and safety domain.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_94a0ba0d-d03","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5684673004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["product management","machine learning","trust and safety"],"x-skills-preferred":["data analysis","project management","communication skills"],"datePosted":"2026-01-08T03:20:46.873Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Product Management","industry":"Technology","skills":"product management, machine learning, trust and safety, data analysis, project management, communication skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_08601cec-e8b"},"title":"Senior AI Red Team Analyst","description":"<p>As a Trust and Safety AI Red Team Analyst at Epic Games, you will be instrumental in protecting our gaming ecosystem by identifying and mitigating trust and safety risks in AI-driven features. 
Your work will ensure that our games remain safe, inclusive, and enjoyable for players by proactively addressing potential abuses of our content rules and our community rules.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Take a leadership role in developing, prototyping, and teaching novel red teaming techniques and trust and safety methodologies to enhance team capabilities</li>\n<li>Investigate and understand how adversarial attacks, such as prompt injections, data poisoning, or bias exploitation, could manifest in Epic’s products</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of experience conducting investigations or red teaming in fields such as cybersecurity, AI ethics, trust and safety, or related areas</li>\n<li>Proven ability to develop multi-source, evidence-based findings and communicate them effectively to technical and non-technical stakeholders</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_08601cec-e8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5678361004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["investigations","red teaming","cybersecurity","AI ethics","trust and safety"],"x-skills-preferred":["data analysis","Python","SQL"],"datePosted":"2026-01-08T03:15:30.422Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"investigations, red teaming, cybersecurity, AI ethics, trust and safety, data analysis, Python, SQL"}]}