{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/content-moderation"},"x-facet":{"type":"skill","slug":"content-moderation","display":"Content Moderation","count":21},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25cacbc0-046"},"title":"Senior Analyst, Legal Operations","description":"<p>We are seeking a skilled Legal Operations Senior Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. 
You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n</ul>\n<ul>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n</ul>\n<ul>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n</ul>\n<ul>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n</ul>\n<ul>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n</ul>\n<ul>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n</ul>\n<ul>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n</ul>\n<ul>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n</ul>\n<ul>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n</ul>\n<ul>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>You may 
represent X in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>5+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n</ul>\n<ul>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n</ul>\n<ul>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n</ul>\n<ul>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n</ul>\n<ul>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training, specifically in areas such as: copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n</ul>\n<ul>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n</ul>\n<ul>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25cacbc0-046","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090690007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:57:37.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7cc85573-4a2"},"title":"Technical Policy Manager, Cyber Harms","description":"<p>We are seeking a Technical Policy Manager, Cyber Harms to lead our efforts to prevent AI misuse in the cyber domain. As a member of our Safeguards team, you will be responsible for designing and overseeing the execution of capability evaluations to assess the cyber-relevant capabilities of new models. 
You will also create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques.</p>\n<p>This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities while preventing misuse by malicious actors.</p>\n<p>In this role, you will lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks. You will serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies.</p>\n<p>You will collaborate closely with internal and external threat modeling experts to develop training data for safety systems, and with ML engineers to train these systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers.</p>\n<p>You will also analyze safety system performance in traffic, identifying gaps and proposing improvements. You will conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks.</p>\n<p>You will develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces. You will partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle.</p>\n<p>You will translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies. 
You will contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety.</p>\n<p>You will monitor emerging technologies and threat landscapes for potential new risks and mitigation opportunities, and address them strategically.</p>\n<p>You will mentor and develop team members, fostering a culture of technical excellence and responsible AI development.</p>\n<p>To be successful in this role, you will need to have:</p>\n<ul>\n<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>\n<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>\n<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>\n<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>\n<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>\n<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>\n<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>\n<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>\n<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>\n<li>A passion for learning new skills and an ability to 
rapidly adapt to changing techniques and technologies</li>\n<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>\n<li>Track record of translating specialized technical knowledge into actionable safety policies or enforcement guidelines</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Background in AI/ML systems, particularly experience with large language models</li>\n<li>Experience developing ML-based security systems or adversarial ML research</li>\n<li>Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)</li>\n<li>Published security research, disclosed vulnerabilities, or participation in bug bounty programs</li>\n<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>\n<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>\n<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7cc85573-4a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066981008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Cybersecurity","Vulnerability research","Exploit development","Network security","Malware analysis","Penetration testing","Detection","Monitoring","Incident response","Scientific computing","Data analysis","Programming (Python)","Responsible disclosure practices","Vulnerability 
coordination","Cybersecurity frameworks (MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)"],"x-skills-preferred":["AI/ML systems","Large language models","ML-based security systems","Adversarial ML research","Defense, intelligence, or security organizations","Published security research","Disclosed vulnerabilities","Bug bounty programs","Trust & Safety operations","Content moderation at scale","Certifications (OSCP, OSCE, GXPN)"],"datePosted":"2026-04-18T15:56:47.739Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cybersecurity, Vulnerability research, Exploit development, Network security, Malware analysis, Penetration testing, Detection, Monitoring, Incident response, Scientific computing, Data analysis, Programming (Python), Responsible disclosure practices, Vulnerability coordination, Cybersecurity frameworks (MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, Large language models, ML-based security systems, Adversarial ML research, Defense, intelligence, or security organizations, Published security research, Disclosed vulnerabilities, Bug bounty programs, Trust & Safety operations, Content moderation at scale, Certifications (OSCP, OSCE, GXPN)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d14ace5b-870"},"title":"Legal Operations Analyst","description":"<p>Job Description:</p>\n<p>We are seeking a skilled Legal Operations Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government 
entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n</ul>\n<ul>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n</ul>\n<ul>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n</ul>\n<ul>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n</ul>\n<ul>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n</ul>\n<ul>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n</ul>\n<ul>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n</ul>\n<ul>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n</ul>\n<ul>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure 
platform rules and regulatory requirements are followed.</li>\n</ul>\n<ul>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>Represent xAI in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>2+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n</ul>\n<ul>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n</ul>\n<ul>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n</ul>\n<ul>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n</ul>\n<ul>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training, specifically in areas such as: copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n</ul>\n<ul>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n</ul>\n<ul>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d14ace5b-870","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5101856007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:56:23.289Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore, SG"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb74277e-4ee"},"title":"Policy Design Manager, Age-Appropriate Design","description":"<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>\n<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. 
In addition, you will advise cross-functional teams on opportunities for age-appropriate helpfulness, including beneficial use cases for younger users where appropriate.</p>\n<p>Safety is core to our mission and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful and honest way.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Serve as an internal subject matter expert, leveraging deep expertise in child safety, adult content, youth development, and age-appropriate design to:</li>\n</ul>\n<ul>\n<li>Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases</li>\n</ul>\n<ul>\n<li>Design evaluation frameworks for testing model performance in areas of expertise</li>\n</ul>\n<ul>\n<li>Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities</li>\n</ul>\n<ul>\n<li>Review flagged content to drive enforcement and policy improvements</li>\n</ul>\n<ul>\n<li>Update our usage policies based on feedback collected from external experts, our enforcement team, and edge cases that you will review</li>\n</ul>\n<ul>\n<li>Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions for users across different age groups</li>\n</ul>\n<ul>\n<li>Advise on age assurance approaches and content classification frameworks in partnership with Enforcement, Product, Engineering, and Legal teams</li>\n</ul>\n<ul>\n<li>Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s)</li>\n</ul>\n<ul>\n<li>Keep up to date with new and existing AI policy norms, regulatory requirements (e.g., age-appropriate design codes), and industry standards, and use these to inform our decision-making on policy areas</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Experience as a 
researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy.</li>\n</ul>\n<ul>\n<li>Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions</li>\n</ul>\n<ul>\n<li>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification/labeling systems</li>\n</ul>\n<ul>\n<li>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development</li>\n</ul>\n<ul>\n<li>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams</li>\n</ul>\n<ul>\n<li>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space</li>\n</ul>\n<ul>\n<li>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations</li>\n</ul>\n<ul>\n<li>Navigating and prioritizing work efforts amidst ambiguity</li>\n</ul>\n<p><strong>Salary:</strong></p>\n<p>$245,000-$285,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb74277e-4ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156326008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["child safety","age assurance","content classification","adult sexual 
content","policy development","enforcement guidelines","safety interventions","generative AI products","classifier development","product policy decisions","content moderation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:48.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI products, classifier development, product policy decisions, content moderation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86d4c902-c89"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>\n<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. 
Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners, including hotlines, 
NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse/non-consensual intimate imagery, or commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology 
development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $245,000-$285,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86d4c902-c89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis tools","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","non-consensual intimate imagery","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem working on these harms","open-source investigations or threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:52:37.777Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation 
and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a8560685-535"},"title":"Staff Product Manager, AI Safety","description":"<p>We&#39;re seeking a Staff Product Manager to join our GenAI Safety team within Trust &amp; Safety. As a key member of our team, you will define and drive the product strategy for ensuring Pinterest&#39;s GenAI-powered systems are safe, fair, and trustworthy.</p>\n<p>As a Staff Product Manager, you will be responsible for building proactive safety frameworks that scale with our growing AI capabilities, partnering deeply with engineering, policy, data science, and design to protect our users while enabling Pinterest to innovate responsibly.</p>\n<p>This is a high-impact role for someone who is passionate about the intersection of AI and user safety, and who thrives in ambiguous, fast-evolving problem spaces. 
You will work at the frontier of responsible AI: anticipating novel harms before they emerge, red-teaming new AI features, and translating complex policy goals into measurable product requirements.</p>\n<p>Key responsibilities will include:</p>\n<ul>\n<li>Defining and driving the product roadmap for GenAI safety across Pinterest&#39;s AI-powered surfaces</li>\n<li>Leading proactive identification of risks, failure modes, and adversarial attack vectors across AI systems</li>\n<li>Designing structured red-teaming exercises and evaluation frameworks before and after product launches</li>\n<li>Partnering closely with Trust &amp; Safety policy, legal, and ethics teams to translate nuanced content guidelines into precise, buildable product requirements and model guardrails</li>\n<li>Working with engineering, ML, design, data science, policy, legal, comms, and operations teams to define, align, and ship AI safety solutions across global markets and diverse user populations</li>\n<li>Defining and tracking quantitative safety metrics, including fairness audits, false positive/negative rates, disparate impact analysis, and content harm reduction</li>\n<li>Developing and maintaining AI safety incident runbooks and escalation frameworks, and leading rapid triage and remediation when AI systems produce harmful or unexpected outputs</li>\n</ul>\n<p>If you&#39;re passionate about the intersection of AI and user safety, and thrive in ambiguous, fast-evolving problem spaces, we encourage you to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a8560685-535","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7718015","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$164,695-$339,078 USD","x-skills-required":["GenAI/ML","Trust & Safety","Content Moderation","Responsible AI","AI Ethics Frameworks","Regulatory Landscapes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:47.687Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI/ML, Trust & Safety, Content Moderation, Responsible AI, AI Ethics Frameworks, Regulatory Landscapes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":164695,"maxValue":339078,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6e3c80cb-f5e"},"title":"Director, Public Policy","description":"<p>We&#39;re looking for an experienced public policy leader to serve as Head of Government Affairs, leading our engagement with government stakeholders across North America. This is a high-impact, high-visibility role at the intersection of technology, policy, and purpose, and a rare opportunity to shape how one of the world&#39;s most positively differentiated platforms navigates an evolving political and regulatory landscape.</p>\n<p>Based in Washington, D.C., you will be the architect of Pinterest&#39;s public policy strategy across multiple jurisdictions. 
You will build and steward the relationships, coalitions, and campaigns necessary to educate policymakers about Pinterest&#39;s unique platform, advocate for sound policy outcomes, and ensure the company is recognised as a trusted and responsible technology leader.</p>\n<p>The ideal candidate brings broad and deep experience navigating public policy issues pertaining to technology, internet platforms, and digital innovation. The role is ideally suited for a proactive, results-oriented leader who thrives in a fast-paced environment, and is equally comfortable diving into the substance of complex policy challenges, and building the strategic relationships needed to advance Pinterest&#39;s policy priorities and protect the company&#39;s interests.</p>\n<p>Above all, you bring the conviction that thoughtful policy engagement can help build a better, more positive internet.</p>\n<p>As the Head of Government Affairs, you will:</p>\n<ul>\n<li>Lead our public policy efforts and advocacy in North America. 
You will develop and execute our government affairs strategy at the country, state, and local levels, and address the impact of proposals on Pinterest’s products, operations, growth, and strategic initiatives.</li>\n</ul>\n<ul>\n<li>Build trusted relationships with policymakers, issue experts, trade associations, and industry partners to mobilise support on key issues that impact Pinterest’s ability to achieve its mission and support our users.</li>\n</ul>\n<ul>\n<li>Navigate a wide range of technology policy areas, including data privacy, AI governance, content moderation, online safety, children&#39;s digital wellbeing, competition/antitrust, intellectual property, and digital advertising.</li>\n</ul>\n<ul>\n<li>Advise colleagues, as well as senior management and cross-functional leaders, about relevant legislative issues and strategies to inform policy development and deliver effective solutions and valuable outcomes.</li>\n</ul>\n<ul>\n<li>Proactively collaborate with external constituencies on a range of issues that matter to Pinterest to ensure our team is thoughtfully engaging on various challenges that could impact our ability to serve Pinners.</li>\n</ul>\n<ul>\n<li>Drive cross-functional alignment by partnering with colleagues across Legal, Communications, Trust &amp; Safety, Product, and other teams on integrated advocacy campaigns, strategic positioning, and external engagement that aligns with company objectives across priority jurisdictions.</li>\n</ul>\n<ul>\n<li>Deploy an AI-enabled policy function to track legislative and regulatory developments, map stakeholders, and generate engagement strategies and policy positioning to disseminate actionable recommendations across the organisation.</li>\n</ul>\n<ul>\n<li>Maintain a knowledge base leveraging AI-driven tagging and retrieval mechanisms to ensure consistent, high-quality advocacy materials and to facilitate the development of tailored messaging for various stakeholders.</li>\n</ul>\n<p>What 
we’re looking for:</p>\n<ul>\n<li>12+ years of progressive experience in government affairs, public policy, or legislative/regulatory roles, with significant focus on the technology sector.</li>\n</ul>\n<ul>\n<li>Comprehensive knowledge of internet platform policy, digital innovation, and the regulatory frameworks shaping the tech industry.</li>\n</ul>\n<ul>\n<li>Excellent communication and public speaking skills; a compelling storyteller who can distill complex technical and policy concepts for diverse audiences, from Capitol Hill to the C-Suite.</li>\n</ul>\n<ul>\n<li>Exceptional strategic acumen with the ability to see around corners, anticipate policy shifts, and develop proactive strategies that protect and advance business interests.</li>\n</ul>\n<ul>\n<li>A strong track record of developing and leading execution of public policy campaigns to address public policy challenges and achieve high-level objectives.</li>\n</ul>\n<ul>\n<li>The ability to work independently and develop and maintain relationships across the company, while working remotely from Washington, DC.</li>\n</ul>\n<ul>\n<li>Demonstrated capacity to operationalise AI tools for policy intelligence, drafting workflows, stakeholder mapping, and issue management.</li>\n</ul>\n<ul>\n<li>You thrive in an environment with changing needs and variability across work and issues week to week.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree in a relevant field such as political science or government, or equivalent experience. 
JD, MPP/MPA also welcome, but not required.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6e3c80cb-f5e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7537549","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$178,561-$367,626 USD","x-skills-required":["public policy","government affairs","legislative affairs","regulatory affairs","policy analysis","stakeholder engagement","coalition building","advocacy","strategic planning","AI governance","data privacy","content moderation","online safety","children's digital wellbeing","competition/antitrust","intellectual property","digital advertising"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:04.055Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, D.C, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Government Affairs","industry":"Technology","skills":"public policy, government affairs, legislative affairs, regulatory affairs, policy analysis, stakeholder engagement, coalition building, advocacy, strategic planning, AI governance, data privacy, content moderation, online safety, children's digital wellbeing, competition/antitrust, intellectual property, digital advertising","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":178561,"maxValue":367626,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e3b1c38b-ef1"},"title":"Staff Software Engineer, Communication Products","description":"<p>Job Title: 
Staff Software Engineer, Communication Products</p>\n<p>We are seeking a highly skilled and experienced Staff Software Engineer to join our Communication Products team. As a Staff Engineer, you will be responsible for leading the technical vision for ML-powered messaging features, architecting and delivering intelligent capabilities end-to-end, and partnering deeply with ML and product teams.</p>\n<p>The Difference You Will Make:</p>\n<p>As a Staff Engineer on the team, you will define and drive the technical strategy for integrating ML capabilities into Airbnb&#39;s messaging products, including smart replies, message classification, content moderation, translation, and conversational assistance. You will also own the full lifecycle of ML-powered features: from prototyping and experimentation through launch, monitoring, and iteration.</p>\n<p>A Typical Day:</p>\n<ul>\n<li>Design, build, and operate the systems that serve ML models within the messaging stack, with a focus on latency, reliability, and scalability</li>\n<li>Write and review technical designs that solve large, open-ended problems at the intersection of ML and product engineering without clearly-known solutions</li>\n<li>Partner with ML, data science, and product teams to identify high-value opportunities, establish evaluation criteria, and close the gap between offline model performance and production impact</li>\n<li>Collaborate with other engineers and cross-functional partners across Messaging, Trust &amp; Safety, Localization, and Platform organizations to align on long-term technical solutions</li>\n<li>Mentor, guide, advocate, and support the career growth of individual contributors</li>\n<li>Establish engineering standards for ML integration across the messaging surface, including feature flagging, A/B testing, observability, and graceful degradation</li>\n</ul>\n<p>Your Expertise:</p>\n<ul>\n<li>9+ years of relevant engineering hands-on work experience</li>\n<li>Bachelors, Masters, or PhD in 
CS or related field</li>\n<li>Demonstrated experience building and shipping ML-powered product features in production environments, including model serving, feature pipelines, online/offline evaluation, and monitoring</li>\n<li>Exceptional architecture abilities and experience with architectural patterns of large, high-scale applications</li>\n<li>Familiarity with NLP/NLU techniques and large language models, particularly as applied to messaging, conversational AI, or content understanding</li>\n<li>Shipped several large-scale projects with multiple dependencies across teams, specifically at the intersection of ML infrastructure and product engineering</li>\n<li>Technical leadership and strong communication skills with the ability to translate between ML research, product goals, and engineering execution</li>\n<li>Experience operating distributed, real-time systems at scale with high reliability requirements</li>\n<li>Experience with real-time messaging systems or event-driven architectures</li>\n<li>Familiarity with ML infrastructure at scale (e.g., feature stores, model registries, online inference platforms)</li>\n<li>Prior work on trust &amp; safety, content moderation, or internationalization in a messaging context</li>\n<li>Experience with LLM-based product features, including prompt engineering, retrieval-augmented generation, or fine-tuning</li>\n</ul>\n<p>How We&#39;ll Take Care of You:</p>\n<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. 
This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>\n<p>Pay Range: $204,000-$255,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e3b1c38b-ef1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7655958","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$204,000-$255,000 USD","x-skills-required":["ML-powered product features","model serving","feature pipelines","online/offline evaluation","monitoring","architectural patterns","NLP/NLU techniques","large language models","messaging","conversational AI","content understanding","distributed, real-time systems","real-time messaging systems","event-driven architectures","ML infrastructure","feature stores","model registries","online inference platforms","trust & safety","content moderation","internationalization","LLM-based product features","prompt engineering","retrieval-augmented generation","fine-tuning"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:16.839Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML-powered product features, model serving, feature pipelines, online/offline evaluation, monitoring, architectural patterns, NLP/NLU techniques, large language models, messaging, conversational AI, content understanding, distributed, real-time systems, real-time messaging systems, event-driven architectures, ML infrastructure, feature stores, model registries, online inference platforms, trust & safety, content moderation, 
internationalization, LLM-based product features, prompt engineering, retrieval-augmented generation, fine-tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":204000,"maxValue":255000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e03253e3-c7f"},"title":"Safeguards Analyst, Human Exploitation & Abuse","description":"<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>\n<p>You will be a member of the user well-being team, and your initial focus will be on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>\n<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>\n<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. 
There is also an on-call responsibility across the Policy and Enforcement teams.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>\n</ul>\n<ul>\n<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>\n</ul>\n<ul>\n<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>\n</ul>\n<ul>\n<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>\n</ul>\n<ul>\n<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>\n</ul>\n<ul>\n<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>\n</ul>\n<ul>\n<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>\n</ul>\n<ul>\n<li>Subject matter expertise in one or more of: human trafficking, human 
exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>\n</ul>\n<ul>\n<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>\n</ul>\n<ul>\n<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>\n</ul>\n<ul>\n<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>\n</ul>\n<ul>\n<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>\n</ul>\n<ul>\n<li>Strong attention to detail and ability to maintain accurate documentation</li>\n</ul>\n<ul>\n<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>\n</ul>\n<ul>\n<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>\n</ul>\n<ul>\n<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>\n</ul>\n<ul>\n<li>A deep interest in AI safety and responsible technology development</li>\n</ul>\n<ul>\n<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>\n</ul>\n<p><strong>Compensation:</strong></p>\n<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e03253e3-c7f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156333008","x-work-arrangement":"remote-hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["trust and safety","content moderation","counter-exploitation work","SQL","Python","data analysis","detection and review workflows","sensitive content","human trafficking","human exploitation and abuse","sextortion","image-based sexual abuse","commercial sexual exploitation"],"x-skills-preferred":["NGO and industry ecosystem","open-source investigations","threat actor profiling","generative AI products","AI safety and responsible technology development","real-world harm escalation pathways"],"datePosted":"2026-04-18T15:45:00.507Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis, detection and review workflows, sensitive content, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, commercial sexual exploitation, NGO and industry ecosystem, open-source investigations, threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation 
pathways","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd9625d9-99b"},"title":"ML Infrastructure Engineer, Safeguards","description":"<p>We are seeking a Machine Learning Infrastructure Engineer to join our Safeguards organization, where you&#39;ll build and scale the critical infrastructure that powers our AI safety systems.</p>\n<p>As part of the Safeguards team, you&#39;ll design and implement ML infrastructure that powers Claude safety. Your work will directly contribute to making AI systems more trustworthy and aligned with human values, ensuring our models operate safely as they become more capable.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem</li>\n<li>Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications</li>\n<li>Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems</li>\n<li>Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards</li>\n<li>Implement automated testing, deployment, and rollback systems for ML models in production safety applications</li>\n<li>Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs</li>\n<li>Contribute to the development of internal tools and frameworks that accelerate safety research and deployment</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 5+ years of experience building production ML infrastructure, ideally in safety-critical domains 
like fraud detection, content moderation, or risk assessment</li>\n<li>Are proficient in Python and have experience with ML frameworks like PyTorch, TensorFlow, or JAX</li>\n<li>Have hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes)</li>\n<li>Understand distributed systems principles and have built systems that handle high-throughput, low-latency workloads</li>\n<li>Have experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems)</li>\n<li>Are results-oriented, with a bias towards reliability and impact in safety-critical systems</li>\n<li>Enjoy collaborating with researchers and translating cutting-edge research into production systems</li>\n<li>Care deeply about AI safety and the societal impacts of your work</li>\n</ul>\n<p>Strong candidates may have experience with:</p>\n<ul>\n<li>Working with large language models and modern transformer architectures</li>\n<li>Implementing A/B testing frameworks and experimentation infrastructure for ML systems</li>\n<li>Developing monitoring and alerting systems for ML model performance and data drift</li>\n<li>Building automated labeling systems and human-in-the-loop workflows</li>\n<li>Experience in trust &amp; safety, fraud prevention, or content moderation domains</li>\n<li>Knowledge of privacy-preserving ML techniques and compliance requirements</li>\n<li>Contributing to open-source ML infrastructure projects</li>\n</ul>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. 
Not all strong candidates will meet every single qualification as listed.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bd9625d9-99b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4778843008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","JAX","Cloud platforms (AWS, GCP)","Container orchestration (Kubernetes)","Distributed systems principles","Data engineering tools (Spark, Airflow, streaming systems)"],"x-skills-preferred":["Large language models and modern transformer architectures","A/B testing frameworks and experimentation infrastructure for ML systems","Monitoring and alerting systems for ML model performance and data drift","Automated labeling systems and human-in-the-loop workflows","Trust & safety, fraud prevention, or content moderation domains","Privacy-preserving ML techniques and compliance requirements","Open-source ML infrastructure projects"],"datePosted":"2026-04-18T15:44:06.907Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, JAX, Cloud platforms (AWS, GCP), Container orchestration (Kubernetes), Distributed systems principles, Data engineering tools (Spark, Airflow, streaming systems), Large language models and modern transformer architectures, A/B testing frameworks and experimentation infrastructure for ML systems, Monitoring and alerting systems for ML model performance and data drift, Automated labeling systems and 
human-in-the-loop workflows, Trust & safety, fraud prevention, or content moderation domains, Privacy-preserving ML techniques and compliance requirements, Open-source ML infrastructure projects","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_74c9dcaa-dfd"},"title":"Policy Design Manager, Age-Appropriate Design","description":"<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>\n<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. 
In addition, you will advise teams on opportunities for age-appropriate helpfulness, including advising cross-functional teams on beneficial use cases for younger users where appropriate.</p>\n<p>Safety is core to our mission, and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.</p>\n<p>You may be a good fit if you have experience:</p>\n<p>As a researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy.</p>\n<p>Drafting or updating product and/or user policies, with the ability to effectively bridge technical and policy discussions.</p>\n<p>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification/labeling systems.</p>\n<p>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development.</p>\n<p>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams.</p>\n<p>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space.</p>\n<p>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations.</p>\n<p>Navigating and prioritizing work efforts amidst ambiguity.</p>\n<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_74c9dcaa-dfd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156326008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["child safety","age assurance","content classification","adult sexual content","policy development","enforcement guidelines","safety interventions"],"x-skills-preferred":["generative AI","classifier development","policy evaluations","product policy decisions","content moderation"],"datePosted":"2026-04-18T15:43:08.493Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI, classifier development, policy evaluations, product policy decisions, content moderation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a4ba044a-dd8"},"title":"Senior Analyst, Legal Operations","description":"<p>We are seeking a skilled Legal Operations Senior Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, 
data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n</ul>\n<ul>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n</ul>\n<ul>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n</ul>\n<ul>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n</ul>\n<ul>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n</ul>\n<ul>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n</ul>\n<ul>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n</ul>\n<ul>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n</ul>\n<ul>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n</ul>\n<ul>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>You may represent X in witness testimony or other external engagements.</li>\n</ul>","url":"https://yubhub.co/jobs/job_a4ba044a-dd8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090690007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools","global content-removal processes","user-data production","subpoenas","court orders","warrants","MLAT requests"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:24:58.066Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, global content-removal processes, user-data production, subpoenas, court orders, warrants, MLAT requests"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_efeb9a3d-a7a"},"title":"Program Manager - Quality & Training (Safety Operations)","description":"<p>About xAI</p>\n<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in 
its pursuit of knowledge.</p>\n<p>About the Role</p>\n<p>The Program Manager, Quality &amp; Training will lead the strategy, development, and execution of Safety Operations&#39; quality and learning programs. This role is responsible for building scalable systems that improve moderator performance, strengthen critical thinking, and elevate decision-making accuracy.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Develop and execute a comprehensive, automation-first quality and training strategy aligned with Safety Operations and product objectives.</li>\n<li>Integrate AI tools and workflow improvements into learning and quality processes.</li>\n<li>Build scalable onboarding and continuous learning frameworks to support workforce growth.</li>\n<li>Align training and quality initiatives with hiring scales, operational shifts, and performance trends.</li>\n<li>Design and refine training programs focused on policy interpretation, critical thinking, AI tool utilization, and moderation accuracy.</li>\n<li>Create facilitator guides, learning materials, assessments, and reinforcement mechanisms.</li>\n<li>Lead and facilitate training sessions as needed.</li>\n<li>Implement and oversee quality assurance processes, including audits, calibrations, and accuracy tracking.</li>\n<li>Define and monitor KPIs such as quality scores, policy adherence, decision consistency, and training effectiveness.</li>\n<li>Conduct root-cause analysis and implement corrective actions to improve performance outcomes.</li>\n<li>Manage and develop two Safety Trainers, ensuring high-quality facilitation and consistent delivery standards.</li>\n<li>Provide coaching, performance feedback, and professional development support.</li>\n<li>Partner with Product, Policy, and Operations teams to ensure training aligns with evolving platform standards.</li>\n<li>Support compliance and certification initiatives as required.</li>\n</ul>\n<p>Basic Qualifications</p>\n<ul>\n<li>Bachelor&#39;s degree in Education, Business, Operations, Human Resources, or a related field.</li>\n<li>5+ years of experience in program management, training development, quality assurance, or operational enablement.</li>\n<li>Familiarity with AI tools and experience embedding technology into workforce workflows.</li>\n<li>Proven experience designing and scaling training programs tied to measurable performance improvements.</li>\n<li>Experience implementing quality frameworks and using data to drive operational decisions.</li>\n<li>Prior people management experience with direct reports.</li>\n<li>Strong facilitation and communication skills.</li>\n<li>Ability to operate effectively in a fast-paced, evolving environment.</li>\n<li>Experience with LMS platforms for content development.</li>\n</ul>\n<p>Preferred Skills and Experience</p>\n<ul>\n<li>Experience in Trust &amp; Safety, content moderation, compliance, or similarly sensitive operational environments.</li>\n<li>Experience working in a high-growth or startup environment.</li>\n<li>PMP, Six Sigma, ISO, or other operational excellence certifications.</li>\n<li>Experience with LMS platforms such as Absorb, Articulate 360, or Notion.</li>\n</ul>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>","url":"https://yubhub.co/jobs/job_efeb9a3d-a7a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5094542007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["program management","training development","quality assurance","operational enablement","AI tools","LMS 
platforms","facilitation","communication","people management"],"x-skills-preferred":["Trust & Safety","content moderation","compliance","PMP","Six Sigma","ISO","Absorb","Articulate 360","Notion"],"datePosted":"2026-04-18T15:24:49.798Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"program management, training development, quality assurance, operational enablement, AI tools, LMS platforms, facilitation, communication, people management, Trust & Safety, content moderation, compliance, PMP, Six Sigma, ISO, Absorb, Articulate 360, Notion"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f20bf333-e3f"},"title":"Legal Operations Analyst","description":"<p>We are seeking a skilled Legal Operations Analyst to enhance our systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. 
You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n</ul>\n<ul>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n</ul>\n<ul>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n</ul>\n<ul>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n</ul>\n<ul>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n</ul>\n<ul>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n</ul>\n<ul>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n</ul>\n<ul>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n</ul>\n<ul>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n</ul>\n<ul>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>You may 
represent X in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>2+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n</ul>\n<ul>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n</ul>\n<ul>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n</ul>\n<ul>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n</ul>\n<ul>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training, specifically in areas such as: copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n</ul>\n<ul>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n</ul>\n<ul>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>","url":"https://yubhub.co/jobs/job_f20bf333-e3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5101856007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:23:41.246Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore, SG"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_febb2521-ab0"},"title":"Global Safety Response Operations Analyst","description":"<p><strong>Compensation\\n\\nThe base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.\\n\\n- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts\\n\\n- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)\\n\\n- 401(k) retirement plan with employer match\\n\\n- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)\\n\\n- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees\\n\\n- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)\\n\\n- Mental health and wellness support\\n\\n- Employer-paid basic life and disability coverage\\n\\n- Annual learning and development stipend to fuel your professional growth\\n\\n- Daily meals in our offices, and meal delivery credits as eligible\\n\\n- Relocation support for eligible employees\\n\\n- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.\\n\\n## About the Team\\n\\nAt OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. 
We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.\\n\\n## About the Role\\n\\nWe’re looking for experienced Trust, Safety, and Risk Operations analysts who have subject matter expertise in one or more of the following areas: policy enforcement and content moderation, fraud and scam prevention, developer risk, or privacy and regulatory escalations. You’ll be on the front lines of safety escalation management, helping to triage and resolve urgent and sensitive cases. You’ll work across subject matter areas, systems, and processes to ensure operational excellence, develop process improvements and automations, and surface insights and trends.\\n\\n## In This Role, You Will:\\n\\n- Handle and resolve high-priority cases across all harm and risk areas, ensuring timely and appropriate resolution in line with policy and legal requirements.\\n\\n- Operate across multiple systems and tools to manage user reports and tickets, internal escalations, and other high priority investigations.\\n\\n- Act as incident manager for escalations requiring nuanced policy, legal, or regulatory interpretation.\\n\\n- Identify and implement process improvements and automation opportunities to increase efficiency, accuracy, and coverage.\\n\\n- Conduct quality reviews and provide feedback to improve consistency across global teams.\\n\\n- Analyze trends and generate insights from escalation and case data to inform policy, product, model behavior, or detection improvements.\\n\\n- Maintain exceptional accuracy, judgment, and composure under pressure when handling sensitive or time-critical situations.\\n\\n- Participate in 24/7 on-call rotation, including off-hours and weekend coverage as needed.\\n\\n## You Might Thrive in This Role If You:\\n\\n- Have 5+ years of experience in trust 
&amp; safety, content moderation, investigations, fraud, or developer risk operations.\\n\\n- Have experience working in incident response, law enforcement response, or escalations management.\\n\\n- Leverage OpenAI technology to enhance workflows, improve decision-making, and scale operational impact.\\n\\n- Bring deep domain expertise in your specialization area and familiarity with relevant legal, policy, and technical frameworks.\\n\\n- Have a track record of scaling operations, building processes, and working cross-functionally to improve performance and safety outcomes.\\n\\n- Possess exceptional analytical skills, able to detect patterns, assess risk, and recommend policy or product changes based on evidence.\\n\\n- Communicate with clarity, empathy, and precision, especially in sensitive user-facing contexts.\\n\\n- Thrive in ambiguous, high-autonomy environments and balance speed with diligence.\\n\\n- Are comfortable with frequent context switching, managing multiple projects, and prioritizing impact.\\n\\n## About OpenAI\\n\\nOpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.\\n\\n## Required Skills\\n\\n- Trust and Safety\\n\\n- Content Moderation\\n\\n- Investigations\\n\\n- Fraud and Scam Prevention\\n\\n- Developer Risk\\n\\n- Privacy and Regulatory Escalations\\n\\n## Preferred Skills\\n\\n- Incident Response\\n\\n- Law Enforcement Response\\n\\n- Escalations Management\\n\\n- OpenAI Technology\\n\\n- Deep Domain Expertise\\n\\n- Analytical Skills\\n\\n- Communication Skills</strong></p>","url":"https://yubhub.co/jobs/job_febb2521-ab0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9e77e103-6b70-4b45-a344-d87c4a2d7e12","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$189K – $280K","x-skills-required":["Trust and Safety","Content Moderation","Investigations","Fraud and Scam Prevention","Developer Risk","Privacy and Regulatory Escalations"],"x-skills-preferred":["Incident Response","Law Enforcement Response","Escalations Management","OpenAI Technology","Deep Domain Expertise","Analytical Skills","Communication Skills"],"datePosted":"2026-03-08T22:16:27.787Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust and Safety, Content Moderation, Investigations, Fraud and Scam Prevention, Developer Risk, Privacy and Regulatory Escalations, Incident Response, Law Enforcement Response, 
Escalations Management, OpenAI Technology, Deep Domain Expertise, Analytical Skills, Communication Skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c76d0c6d-ec7"},"title":"Technical Policy Manager, Cyber Harms","description":"<p><strong>About the Role:</strong></p>\n<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>\n<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>\n<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>\n<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>\n<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>\n<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>\n<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>\n<li>Conduct 
regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>\n<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>\n<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>\n<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>\n<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>\n<li>Monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these</li>\n<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>An M.S. 
or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>\n<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>\n<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>\n<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>\n<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>\n<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>\n<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>\n<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>\n<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>\n<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>\n<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>\n<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Background in AI/ML systems, particularly experience with large language models</li>\n<li>Experience developing ML-based security systems or adversarial ML 
research</li>\n<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>\n<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>\n<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>\n<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>\n<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>\n</ul>","url":"https://yubhub.co/jobs/job_c76d0c6d-ec7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066981008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cybersecurity","vulnerability research","exploit development","network security","malware analysis","penetration testing","scientific computing","data analysis","programming (Python)","threat modelling","policy frameworks","responsible disclosure practices","vulnerability coordination","cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)"],"x-skills-preferred":["AI/ML systems","large language models","ML-based security systems","adversarial ML research","defence, intelligence, or security organisations","NSA, CISA, national labs, security contractors","published security research","disclosed vulnerabilities","bug bounty programs","Trust & Safety operations","content moderation at scale","OSCP, OSCE, GXPN, or equivalent certifications","dual-use security 
research concerns","ethical considerations in AI safety"],"datePosted":"2026-03-08T13:50:25.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, programming (Python), threat modelling, policy frameworks, responsible disclosure practices, vulnerability coordination, cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, large language models, ML-based security systems, adversarial ML research, defence, intelligence, or security organisations, NSA, CISA, national labs, security contractors, published security research, disclosed vulnerabilities, bug bounty programs, Trust & Safety operations, content moderation at scale, OSCP, OSCE, GXPN, or equivalent certifications, dual-use security research concerns, ethical considerations in AI safety"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6cc383e0-ff6"},"title":"ML Infrastructure Engineer, Safeguards","description":"<p><strong>About the role</strong></p>\n<p>We are seeking a Machine Learning Infrastructure Engineer to join our Safeguards organization, where you&#39;ll build and scale the critical infrastructure that powers our AI safety systems. 
You&#39;ll work at the intersection of machine learning, large-scale distributed systems, and AI safety, developing the platforms and tools that enable our safeguards to operate reliably at scale.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem</li>\n<li>Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications</li>\n<li>Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems</li>\n<li>Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards</li>\n<li>Implement automated testing, deployment, and rollback systems for ML models in production safety applications</li>\n<li>Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs</li>\n<li>Contribute to the development of internal tools and frameworks that accelerate safety research and deployment</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 5+ years of experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment</li>\n<li>Are proficient in Python and have experience with ML frameworks like PyTorch, TensorFlow, or JAX</li>\n<li>Have hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes)</li>\n<li>Understand distributed systems principles and have built systems that handle high-throughput, low-latency workloads</li>\n<li>Have experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems)</li>\n<li>Are results-oriented, with a bias towards reliability and impact 
in safety-critical systems</li>\n<li>Enjoy collaborating with researchers and translating cutting-edge research into production systems</li>\n<li>Care deeply about AI safety and the societal impacts of your work</li>\n</ul>\n<p><strong>Strong candidates may have experience with:</strong></p>\n<ul>\n<li>Working with large language models and modern transformer architectures</li>\n<li>Implementing A/B testing frameworks and experimentation infrastructure for ML systems</li>\n<li>Developing monitoring and alerting systems for ML model performance and data drift</li>\n<li>Building automated labeling systems and human-in-the-loop workflows</li>\n<li>Experience in trust &amp; safety, fraud prevention, or content moderation domains</li>\n<li>Knowledge of privacy-preserving ML techniques and compliance requirements</li>\n<li>Contributing to open-source ML infrastructure projects</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. 
And we value impact — advancing the state of the art in AI safety and making a meaningful difference in the world.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6cc383e0-ff6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4778843008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $405,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","JAX","AWS","GCP","Kubernetes","Spark","Airflow","streaming systems"],"x-skills-preferred":["large language models","modern transformer architectures","A/B testing frameworks","experimentation infrastructure","monitoring and alerting systems","automated labeling systems","human-in-the-loop workflows","trust & safety","fraud prevention","content moderation domains","privacy-preserving ML techniques","compliance requirements"],"datePosted":"2026-03-08T13:46:05.401Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, JAX, AWS, GCP, Kubernetes, Spark, Airflow, streaming systems, large language models, modern transformer architectures, A/B testing frameworks, experimentation infrastructure, monitoring and alerting systems, automated labeling systems, human-in-the-loop workflows, trust & safety, fraud prevention, content moderation domains, privacy-preserving ML techniques, compliance 
requirements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_efe846b7-441"},"title":"Enforcement Operations Lead","description":"<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p>About the Role</p>\n<p>Anthropic&#39;s Safeguards team is responsible for enforcing our policies, protecting users, and ensuring our platform is not misused. As a Safeguards Enforcement Analyst focused on Safety Evaluations, you&#39;ll play a central role in ensuring our models meet safety and policy standards before and after launch. You&#39;ll run and monitor evaluations, drive mitigations when issues surface, coordinate the creation of new evals, and help build the processes and documentation that allow the team to scale this work over time.</p>\n<p>This role requires someone who is detail-oriented, comfortable navigating ambiguity, and capable of coordinating across teams to break new ground and drive work to completion. 
This work is deeply cross-functional — you&#39;ll partner closely with policy experts, Safeguards engineering teams, and many other stakeholders throughout the organization to ensure our evaluations are comprehensive and current, and that findings translate into meaningful improvements to model behavior.</p>\n<p>Responsibilities</p>\n<p><em>Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</em></p>\n<p><strong>Vendor Operations</strong></p>\n<ul>\n<li>Own end-to-end management of content moderation vendor relationships, including onboarding, performance management, quality assurance, and capacity planning</li>\n<li>Partner with internal stakeholders to define vendor scope, set SLAs, and evaluate vendor output quality on an ongoing basis</li>\n<li>Identify opportunities to scale content review operations efficiently as Anthropic&#39;s product surface area grows</li>\n<li>Develop and maintain standard operating procedures (SOPs) for all vendor-executed review workflows, ensuring consistency and accuracy across content</li>\n</ul>\n<p><strong>Regulatory Reporting and Enforcement</strong></p>\n<ul>\n<li>Partner with Regulatory Operations to ensure that new product features and content surfaces are incorporated into Safeguards reporting workflows as they launch</li>\n<li>Own enforcement reporting for Regulatory Operations requirements, including maintaining and updating dashboards and tracking mechanisms that provide accurate, timely data to regulatory bodies</li>\n<li>Produce on-request read-outs of enforcement metrics over specified time ranges to support regulatory reporting obligations</li>\n<li>Identify and drive improvements to existing reporting infrastructure — including transitioning manual, spreadsheet-based workflows to more robust and scalable solutions</li>\n<li>Oversee the user-reported content 
review pipeline, including reviews submitted via the Content Reporting Form across all supported content surfaces</li>\n<li>Ensure SOPs for content review workflows are kept current as new features and surfaces are added</li>\n<li>Work collaboratively with the RegOps team to ensure intake processes are prepared to handle emerging report types (e.g., third-party MCP server reports)</li>\n<li>Maintain a strong understanding of Anthropic&#39;s policy framework to provide informed operational guidance and escalation support</li>\n</ul>\n<p><strong>Copyright Operations</strong></p>\n<ul>\n<li>Oversee Safeguards copyright systems, ensuring the right operational processes are in place to handle copyright-related enforcement at scale</li>\n<li>Partner closely with the Regulatory Operations team to scale copyright operations as Anthropic&#39;s products grow, with a particular focus on reducing false positives and improving the accuracy of copyright enforcement workflows</li>\n<li>Identify gaps in current copyright operational processes and drive cross-functional solutions in collaboration with policy, legal, and engineering stakeholders</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 5+ years of experience in trust and safety operations, content moderation program management, or a related field</li>\n<li>Have managed external vendor or contractor relationships, including performance management and quality assurance</li>\n<li>Are comfortable working across policy, legal, and operations teams to translate compliance requirements into practical workflows</li>\n<li>Have experience building or improving operational reporting, dashboards, or enforcement tracking systems</li>\n<li>Are highly organized, with a track record of maintaining rigorous documentation and SOPs in fast-moving environments</li>\n<li>Communicate clearly and precisely — both in writing and verbally — across technical and non-technical audiences</li>\n<li>Are energized by the challenge of 
building scalable systems in an environment where not everything is already figured out</li>\n<li>Care deeply about the responsible deployment of AI and the role enforcement operations plays in that mission</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience working with regulatory reporting requirements, particularly in the context of online platforms or AI systems</li>\n<li>Familiarity with content moderation tooling and review workflows at scale</li>\n<li>Experience with copyright enforcement operations, including false positive mitigation strategies</li>\n<li>Background in policy enforcement, legal operations, or compliance program management</li>\n<li>Experience supporting or standing up a new operational function, including writing foundational SOPs and building institutional knowledge from scratch</li>\n<li>Comfort working with data and metrics to inform operational decisions and surface trends to leadership</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_efe846b7-441","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5137185008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"The annual compensation range for this role is listed below.","x-skills-required":["trust and safety operations","content moderation program management","vendor relationship management","regulatory reporting","copyright enforcement","policy enforcement","legal operations","compliance program management"],"x-skills-preferred":["content moderation tooling","review workflows at scale","regulatory reporting requirements","AI 
systems"],"datePosted":"2026-03-08T13:42:56.321Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"trust and safety operations, content moderation program management, vendor relationship management, regulatory reporting, copyright enforcement, policy enforcement, legal operations, compliance program management, content moderation tooling, review workflows at scale, regulatory reporting requirements, AI systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_486aaff1-a4a"},"title":"Safety Response Operations Lead","description":"<p><strong>Safety Response Operations Lead</strong></p>\n<p>At OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.</p>\n<p>The <strong>Global Safety Response Operations</strong> team provides 24/7 coverage for user safety, risk, and regulatory escalations across OpenAI’s products, handling the highest-priority cases that require human judgment and rapid response. The team operates as the core escalation and delivery arm of OpenAI’s safety operations, ensuring that our products remain safe and aligned with policy while enabling timely, empathetic, and consistent user support.</p>\n<p><strong>About the Role</strong></p>\n<p>The Global Safety Response Operations Lead is a hands-on team lead who both manages a regional Safety Response team and personally handles high-risk safety cases. 
This role combines frontline safety work with people leadership, operational ownership, and cross-functional coordination.</p>\n<p>You will lead a team of Safety Response Analysts who handle OpenAI’s most sensitive and high-impact cases, while also personally contributing to casework for complex, high-risk, or high-visibility issues. You will own the execution of high-severity escalations in your region and ensure proper execution and communication to cross-functional stakeholders.</p>\n<p>You will be accountable for ensuring your region consistently meets utilization, quality, and SLA targets while serving as the operational interface with Product, Policy, Legal, Investigations, and regional stakeholders.</p>\n<p>This is a 24/7 global operation that requires flexibility to support rotating shifts, including nights, weekends, and holidays, as part of a leadership on-call model.</p>\n<p><strong>In This Role, You Will:</strong></p>\n<ul>\n<li>Lead and coach a regional team of Safety Response Analysts, ensuring high performance, engagement, and consistent decision quality.</li>\n</ul>\n<ul>\n<li>Own regional operational outcomes, including utilization, SLA adherence, backlog health, and quality benchmarks.</li>\n</ul>\n<ul>\n<li>Handle and oversee the most complex and high-risk cases, serving as the first line of escalation and incident lead for your region.</li>\n</ul>\n<ul>\n<li>Contribute directly to frontline work (20–30%), including investigations, enforcement decisions, and regulatory or legal escalations.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with Product, Policy, Legal, Investigations, and local market teams to execute safety outcomes and manage risk.</li>\n</ul>\n<ul>\n<li>Drive operational excellence and continuous improvement, improving workflows, tools, automation, and escalation paths.</li>\n</ul>\n<ul>\n<li>Identify emerging risks and trends, translating frontline insights into actionable recommendations for policy, product, or 
enforcement.</li>\n</ul>\n<p><strong>You Might Thrive in This Role If You:</strong></p>\n<ul>\n<li>You have 5+ years in Trust &amp; Safety, Risk Operations, Investigations, Fraud, Annotation, or platform integrity.</li>\n</ul>\n<ul>\n<li>You have 4+ years of people leadership or senior-level operational ownership.</li>\n</ul>\n<ul>\n<li>You are a strong decision-maker in ambiguous, high-risk environments, able to balance speed, accuracy, and defensibility when handling sensitive or high-impact cases.</li>\n</ul>\n<ul>\n<li>You communicate complex safety and risk decisions clearly and credibly, whether writing escalation narratives, briefing Legal or Policy, or aligning Product and leadership during incidents.</li>\n</ul>\n<ul>\n<li>You can translate between frontline operations and strategic stakeholders, turning messy real-world cases into structured insights and turning policy or product direction into clear, executable guidance for your team.</li>\n</ul>\n<ul>\n<li>You are skilled at influencing without authority, building trust with Product, Policy, Legal, Investigations, and regional partners to drive alignment and resolve ambiguity.</li>\n</ul>\n<ul>\n<li>You are deeply familiar with content moderation, user safety, fraud, or developer risk frameworks, including the legal, policy, and technical considerations that shape enforcement.</li>\n</ul>\n<ul>\n<li>You use data, tooling, and automation not just to measure performance but to improve quality, efficiency, and scale and to make better decisions.</li>\n</ul>\n<ul>\n<li>You are comfortable leading in a 24/7, high-pressure operational environment, providing calm, credible leadership during incidents, spikes, and regulatory or reputational events.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_486aaff1-a4a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/d9be9157-c307-4ac4-9aa6-ad2fe7104808","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Trust & Safety","Risk Operations","Investigations","Fraud","Annotation","Platform Integrity","Content Moderation","User Safety","Fraud","Developer Risk","Data Analysis","Tooling","Automation"],"x-skills-preferred":["Leadership","Communication","Influencing","Data-Driven Decision Making","Operational Excellence","Continuous Improvement"],"datePosted":"2026-03-06T18:36:46.323Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust & Safety, Risk Operations, Investigations, Fraud, Annotation, Platform Integrity, Content Moderation, User Safety, Fraud, Developer Risk, Data Analysis, Tooling, Automation, Leadership, Communication, Influencing, Data-Driven Decision Making, Operational Excellence, Continuous Improvement"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6719209d-c0e"},"title":"Safety 
Engineer","description":"<p>We&#39;re looking for an experienced AI Safety Engineer to drive the deployment and operationalization of automated moderation and guardrail systems that protect our platform and users across a multimodal space.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design and build scalable backend infrastructure for content moderation, abuse detection and agents guardrails, deploying AI/ML models into production systems</li>\n<li>Architect robust APIs, data pipelines, and service architectures supporting real-time and batch moderation workflows</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>6+ years of backend software engineering experience building production systems at scale</li>\n<li>Strong production backend experience: distributed systems, APIs, data pipelines, and Python expertise (asynchronous Python, backend frameworks)</li>\n<li>Infrastructure &amp; DevOps proficiency: cloud platforms (AWS/GCP), containerization (Docker/K8s), CI/CD pipelines</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6719209d-c0e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ElevenLabs","sameAs":"https://elevenlabs.io","logo":"https://logos.yubhub.co/elevenlabs.io.png"},"x-apply-url":"https://elevenlabs.io/careers/3b57cc5c-f019-4a0b-a5ff-e1046e4f1fa1/safety-engineer","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend software engineering experience","production backend experience","infrastructure & DevOps proficiency"],"x-skills-preferred":["trust & safety","content moderation","MLOps experience"],"datePosted":"2026-02-03T12:05:13.441Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United 
Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend software engineering experience, production backend experience, infrastructure & DevOps proficiency, trust & safety, content moderation, MLOps experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc406af0-369"},"title":"AI Safety Policy & Operations","description":"<p>We&#39;re looking for someone to join our Safety team and own key outcomes across policy, automation, and enterprise guardrails. You&#39;ll design integrity policies aligned with global regulations, and shape how enterprises implement guardrails when building on our APIs.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Design and evolve safety policies for audio AI, image/video AI, and agentic safety, aligned with ISO42001, the EU AI Act, the DSA, US state laws, and global regulatory developments</li>\n<li>Build scalable, AI-powered systems and workflows that dramatically reduce response times and increase policy coverage</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Broad experience across Trust &amp; Safety: policy, operations, investigations, and content moderation, not just one specialty</li>\n<li>Deep familiarity with the global AI regulatory landscape: EU AI Act, DSA, US state laws, and emerging frameworks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc406af0-369","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ElevenLabs","sameAs":"https://elevenlabs.io","logo":"https://logos.yubhub.co/elevenlabs.io.png"},"x-apply-url":"https://elevenlabs.io/careers/1a781668-d5a0-4c1f-9b2e-88f7fe821119/ai-safety-policy-operations","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Trust & Safety","policy","operations","investigations","content moderation"],"x-skills-preferred":["audio AI","image/video AI","agentic safety","ISO42001","EU AI Act","DSA","US state laws"],"datePosted":"2026-02-03T12:04:45.595Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United Kingdom"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust & Safety, policy, operations, investigations, content moderation, audio AI, image/video AI, agentic safety, ISO42001, EU AI Act, DSA, US state laws"}]}