{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/child-safety"},"x-facet":{"type":"skill","slug":"child-safety","display":"Child Safety","count":9},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25cacbc0-046"},"title":"Senior Analyst, Legal Operations","description":"<p>We are seeking a skilled Legal Operations Senior Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. 
You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n<li>You may 
represent X in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>5+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training, specifically in areas such as: copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25cacbc0-046","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5090690007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:57:37.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d14ace5b-870"},"title":"Legal Operations Analyst","description":"<p>Job Description:</p>\n<p>We are seeking a skilled Legal Operations Analyst to enhance xAI&#39;s systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and 
the optimization of AI-driven automation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n<li>Analyze complex legal and compliance problems in partnership with legal stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n<li>Represent xAI in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>2+ years of hands-on professional experience in legal 
operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training, specifically in areas such as: copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d14ace5b-870","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5101856007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI 
workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:56:23.289Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore, SG"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb74277e-4ee"},"title":"Policy Design Manager, Age-Appropriate Design","description":"<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>\n<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. 
In addition, you will advise teams on opportunities for age-appropriate helpfulness, including advising cross-functional teams on beneficial use cases for younger users where appropriate.</p>\n<p>Safety is core to our mission and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful, and honest way.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Serve as an internal subject matter expert, leveraging deep expertise in child safety, adult content, youth development, and age-appropriate design to:</li>\n<li>Draft new policies that help govern the responsible use of our models for emerging capabilities and use cases</li>\n<li>Design evaluation frameworks for testing model performance in areas of expertise</li>\n<li>Conduct regular reviews and testing of existing policies to identify and address gaps and ambiguities</li>\n<li>Review flagged content to drive enforcement and policy improvements</li>\n<li>Update our usage policies based on feedback collected from external experts, our enforcement team, and edge cases that you will review</li>\n<li>Work with safeguards product teams to identify and mitigate concerns, and collaborate on designing appropriate interventions for users across different age groups</li>\n<li>Advise on age assurance approaches and content classification frameworks in partnership with Enforcement, Product, Engineering, and Legal teams</li>\n<li>Educate and align internal stakeholders around our policies and our approach to safety in your focus area(s)</li>\n<li>Keep up to date with new and existing AI policy norms, regulatory requirements (e.g., age-appropriate design codes), and industry standards, and use these to inform our decision-making on policy areas</li>\n</ul>\n<p><strong>You may be a good fit if you have experience:</strong></p>\n<ul>\n<li>As a 
researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy.</li>\n<li>Drafting or updating product and / or user policies, with the ability to effectively bridge technical and policy discussions</li>\n<li>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification / labeling systems</li>\n<li>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development</li>\n<li>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams</li>\n<li>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space</li>\n<li>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations</li>\n<li>Navigating and prioritizing work efforts amidst ambiguity</li>\n</ul>\n<p><strong>Salary:</strong></p>\n<p>$245,000-$285,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb74277e-4ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156326008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["child safety","age assurance","content classification","adult sexual 
content","policy development","enforcement guidelines","safety interventions","generative AI products","classifier development","product policy decisions","content moderation"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:48.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI products, classifier development, product policy decisions, content moderation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_74c9dcaa-dfd"},"title":"Policy Design Manager, Age-Appropriate Design","description":"<p>As a Safeguards Policy Design Manager, you will be responsible for developing usage policies, clarifying enforcement guidelines, and advising on safety interventions for our products and services. Your core focus will be on age-appropriate design and experiences, including child safety, age assurance, content classification, and adult sexual content.</p>\n<p>You will help define best practices for developers building on Claude for deployment to users across different developmental stages, design age-assurance policies that protect minors from inappropriate content and interactions, and establish clear boundaries for adult content and experiences. 
In addition, you will advise teams on opportunities for age-appropriate helpfulness, including advising cross-functional teams on beneficial use cases for younger users where appropriate.</p>\n<p>Safety is core to our mission and you’ll help shape policy creation and development so that our users can safely interact with and build on top of our products in a harmless, helpful and honest way.</p>\n<p>You may be a good fit if you have experience:</p>\n<p>As a researcher, subject matter expert, or trust &amp; safety professional working in one or more of the following focus areas: child safety, youth online safety, age assurance, developmental science, content classification and rating systems, or adult content policy.</p>\n<p>Drafting or updating product and / or user policies, with the ability to effectively bridge technical and policy discussions.</p>\n<p>Designing or implementing age-appropriate experiences, age assurance mechanisms, or content classification / labeling systems.</p>\n<p>Working with generative AI products, including writing effective prompts for policy evaluations and classifier development.</p>\n<p>Aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Public Policy, and Legal teams.</p>\n<p>Understanding the challenges that exist in developing and implementing product policies at scale, including in the content moderation space.</p>\n<p>Thinking creatively about the risks and benefits of new technologies, and leveraging data and research to inform policy recommendations.</p>\n<p>Navigating and prioritizing work efforts amidst ambiguity.</p>\n<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_74c9dcaa-dfd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5156326008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$245,000-$285,000 USD","x-skills-required":["child safety","age assurance","content classification","adult sexual content","policy development","enforcement guidelines","safety interventions"],"x-skills-preferred":["generative AI","classifier development","policy evaluations","product policy decisions","content moderation"],"datePosted":"2026-04-18T15:43:08.493Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"child safety, age assurance, content classification, adult sexual content, policy development, enforcement guidelines, safety interventions, generative AI, classifier development, policy evaluations, product policy decisions, content moderation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":245000,"maxValue":285000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f578503a-af9"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. 
You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>\n<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>\n<p>You will also have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, and have strong skills in ethical reasoning and risk assessment. You will also have a strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>\n<p>In addition, you will have strong communication, interpersonal, analytical, and ethical decision-making skills. You will be committed to continuous improvement of processes to prioritize safety and risk mitigation. You will also have expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>\n<p>Preferred qualifications include experience working in Trust and Safety at a social media company, leveraging AI or other automation tools. 
You will also have experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms. Additionally, you will have expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f578503a-af9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097904007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$43.75 - $62.50 USD hourly","x-skills-required":["Improving Large Language Models (LLMs)","Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE)","Online safety and reducing harm","Ethical reasoning and risk assessment","Data analysis"],"x-skills-preferred":["Experience working in Trust and Safety at a social media company","Collaborating with child safety organizations","Red-teaming and adversarial testing of Large Language Models"],"datePosted":"2026-04-18T15:25:26.718Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Improving Large Language Models (LLMs), Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation 
(CSE), Online safety and reducing harm, Ethical reasoning and risk assessment, Data analysis, Experience working in Trust and Safety at a social media company, Collaborating with child safety organizations, Red-teaming and adversarial testing of Large Language Models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f818897-404"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Process appeals, audit automations, and properly label use cases in the system.</li>\n<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support, and the ability to propose solutions to increase security and safety of our platform.</li>\n<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>\n<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech 
in the global public square.</li>\n<li>Ability to interpret and apply xAI safety policies effectively.</li>\n<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>\n<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>\n<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>\n<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>\n<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.</li>\n<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>\n<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>\n</ul>\n<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f818897-404","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097907007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Large Language Models (LLMs)","Child Sexual Abuse Material (CSAM)","Child Sexual Exploitation (CSE)","Online safety","Risk assessment","Ethical reasoning","Data analysis","Automation tools","Social media","Generative AI"],"x-skills-preferred":["Red-teaming","Adversarial testing","Trust and Safety","Child safety organizations","Specialized detection tools","Classifier development"],"datePosted":"2026-04-18T15:25:17.446Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f20bf333-e3f"},"title":"Legal Operations Analyst","description":"<p>We are seeking a skilled Legal Operations Analyst to enhance our systems and operations by providing deep expertise in assessing and handling legal requests from government entities all over the world.</p>\n<p>In this role, you will process high-volume content-removal requests under local laws (hate speech, defamation, national security, data-privacy statutes) 
as well as productions of user information in response to legal process (subpoenas, court orders, warrants, MLAT requests, etc.).</p>\n<p>You will leverage your expertise in legal operations, regulatory compliance, and content moderation to support both day-to-day execution and the optimization of AI-driven automation. You will collaborate with technical teams to design, train, and refine AI agents, curate high-quality training data from real cases, and build tools that scale operations while maintaining accuracy and speed.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Join on an on-call rotation, working closely with other members of Safety to provide timely responses to emergency requests and proactive referrals from all over the world.</li>\n</ul>\n<ul>\n<li>Handle global legal information and content removal requests, including document intake and processing.</li>\n</ul>\n<ul>\n<li>Execute and quality-control complex content-removal and user-data-production cases across multiple jurisdictions while applying and interpreting platform policies shaped by evolving legal requirements.</li>\n</ul>\n<ul>\n<li>Serve as the go-to escalation point for ambiguous or high-risk legal requests, exercising sound judgment and ensuring compliance.</li>\n</ul>\n<ul>\n<li>Continuously improve AI agents that automate triage, initial decisioning, redaction, compliance checks, and response workflows.</li>\n</ul>\n<ul>\n<li>Create and maintain high-quality training datasets, evaluation rubrics, and feedback loops using real Legal Operations cases to enhance AI performance.</li>\n</ul>\n<ul>\n<li>Identify automation opportunities and collaborate with technical teams to build end-to-end workflows using automation tools.</li>\n</ul>\n<ul>\n<li>Measure and report on automation coverage, accuracy, risk reduction, and efficiency gains while training and upskilling the broader Legal Operations team.</li>\n</ul>\n<ul>\n<li>Analyze complex legal and compliance problems in partnership with legal 
stakeholders to ensure platform rules and regulatory requirements are followed.</li>\n</ul>\n<ul>\n<li>Interpret, analyze, and execute tasks based on evolving instructions and regulatory changes, maintaining precision and adaptability in partnership with cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>You may represent X in witness testimony or other external engagements.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>2+ years of hands-on professional experience in legal operations, trust &amp; safety, content moderation, compliance, or e-discovery at a major technology or social media company.</li>\n</ul>\n<ul>\n<li>Demonstrated expertise in global content-removal processes and/or user-data production in response to legal requests (subpoenas, MLATs, court orders, and local law enforcement demands).</li>\n</ul>\n<ul>\n<li>Proficiency in reading and writing professional English with excellent communication, interpersonal, analytical, and organizational skills.</li>\n</ul>\n<ul>\n<li>Strong technical aptitude, including experience with prompt engineering, AI workflows, or automation tools in a regulated environment.</li>\n</ul>\n<ul>\n<li>Excellent reading comprehension and the ability to exercise autonomous judgment with limited or ambiguous data.</li>\n</ul>\n<ul>\n<li>Passion for technological advancements and using AI to amplify human expertise in legal and compliance processes.</li>\n</ul>\n<p>Preferred Skills and Qualifications:</p>\n<ul>\n<li>Relevant certification, license, or advanced training in areas such as copyright, privacy laws, child safety, hate speech, incitement, harassment, or misinformation laws by region.</li>\n</ul>\n<ul>\n<li>Comfort with recording audio or video sessions for data collection.</li>\n</ul>\n<ul>\n<li>Familiarity with AI workflows in a technical setting.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f20bf333-e3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5101856007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["legal operations","regulatory compliance","content moderation","prompt engineering","AI workflows","automation tools"],"x-skills-preferred":["copyright","privacy laws","child safety","hate speech","incitement","harassment","misinformation laws"],"datePosted":"2026-04-18T15:23:41.246Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore, SG"}},"employmentType":"FULL_TIME","occupationalCategory":"Legal","industry":"Technology","skills":"legal operations, regulatory compliance, content moderation, prompt engineering, AI workflows, automation tools, copyright, privacy laws, child safety, hate speech, incitement, harassment, misinformation laws"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4eae45a-3da"},"title":"Abuse Investigator - Child Safety","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. 
We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.</p>\n<p>The Intelligence and Investigations team supports this mission by identifying, analyzing, and investigating misuse of our products, particularly novel or emerging abuse patterns. Our work enables partner teams to develop data-backed product policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, legitimate applications.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Child Safety Investigator on the Intelligence &amp; Investigations team, you will identify and disrupt actors attempting to use OpenAI’s products to sexually exploit minors both online and in the real world. OpenAI maintains strict prohibitions in this area and reports apparent CSAM and other credible child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), consistent with applicable law and our policies.</p>\n<p>This role requires domain-specific expertise, technical fluency, and the ability to operate in ambiguous, high-impact situations. You will conduct in-depth investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions — including escalations requiring legal review and external reporting.</p>\n<p>You will also help develop detection strategies that proactively surface high-risk behavior, especially cases that evade existing safeguards. This role includes responding to time-sensitive escalations. 
Investigations may involve exposure to sensitive and disturbing material, including sexual or violent content.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Investigate high-severity child safety violations and disrupt malicious actors in partnership with Policy, Legal, Integrity, Global Affairs, Security, and Engineering teams, including through cross-platform and cross-internet research</li>\n</ul>\n<ul>\n<li>Support investigations across other high-risk harm areas where child safety concerns intersect</li>\n</ul>\n<ul>\n<li>Conduct open-source and cross-platform research to contextualize actors and abuse networks</li>\n</ul>\n<ul>\n<li>Develop detection signals, behavioral heuristics, and tracking strategies to proactively identify high-risk users using tools such as SQL, Databricks, and Python</li>\n</ul>\n<ul>\n<li>Communicate investigation findings clearly and effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries</li>\n</ul>\n<ul>\n<li>Develop a deep, working understanding of OpenAI’s products, internal data systems, and enforcement mechanisms</li>\n</ul>\n<ul>\n<li>Collaborate with engineering and data partners to improve investigative tooling, data quality, and analyst workflows</li>\n</ul>\n<ul>\n<li>Support time-sensitive escalations and high-priority investigations requiring rapid analysis and sound judgment</li>\n</ul>\n<ul>\n<li>Represent investigative findings and work externally with the press, governments, NGOs, and law enforcement agencies</li>\n</ul>\n<ul>\n<li>Participate in a rotating on-call schedule to support timely response to high-priority safety incidents and sensitive investigations</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep expertise in online child safety and child exploitation threats</li>\n</ul>\n<ul>\n<li>Have familiarity or proficiency with technical investigations, especially using SQL, Python, 
notebooks, and scripts in a government, law-enforcement, and/or tech-company setting</li>\n</ul>\n<ul>\n<li>Speak one or more languages in addition to English</li>\n</ul>\n<ul>\n<li>Have 5+ years of experience tracking threat actors in abuse domains</li>\n</ul>\n<ul>\n<li>Have worked on time-sensitive escalations involving high-risk harm</li>\n</ul>\n<ul>\n<li>Have presented analytic findings to senior stakeholders or external partners</li>\n</ul>\n<ul>\n<li>Have experience scaling and automating processes, especially with language models</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see <a href=\"https://cdn.openai.com/policies/eeo-policy-statement.pdf\">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4eae45a-3da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/19b9af1a-6a6e-42e3-824b-a9f3794fef2b","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$158.4K – $425K","x-skills-required":["SQL","Python","Databricks","Notebooks","Scripts"],"x-skills-preferred":["Language models","Technical investigations","Child safety and child exploitation threats"],"datePosted":"2026-03-08T22:16:05.502Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Databricks, Notebooks, Scripts, Language models, Technical investigations, Child safety and child exploitation 
threats","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":158400,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c793fc87-c0b"},"title":"Product Policy – Policy Manager (Child Safety)","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Product Policy – Policy Manager (Child Safety)</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Product Policy</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$261K – $290K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as 
required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Product Policy team is responsible for the development, implementation, enforcement, and communication of the policies that govern use of OpenAI’s services, including ChatGPT, GPTs, the GPT store, Sora, and the OpenAI API. As a member of this team, you will be instrumental in developing policy approaches to best enable both innovative and responsible use of AI so that our groundbreaking technologies are truly used to benefit all people.</p>\n<p>As a member of the Product Policy team, you will leverage an understanding of AI technology, consumer and developer products, as well as the policy landscape to help mitigate risks and ensure OpenAI’s products benefit all of humanity. This role will shape Product Policy’s approach on child safety related policies to support our growing investments in protecting children and young people across our platforms. 
This will include crafting policies with our partner teams to ensure the best possible user experiences with our tools and working closely with investigative, integrity, safety, and ops teams to detect and address misuse.</p>\n<p>We’re looking for candidates with deep expertise in the child safety ecosystem — particularly as it relates to generative AI — who can partner closely with product and legal teams to shape responsible platform policy. Ideal candidates bring a strong understanding of the child safety and AI policy landscape and are able to translate that expertise into clear, practical guidance for product teams and company leadership. As OpenAI continues to scale, this role will play a key part in aligning diverse stakeholders across product, legal, research, global affairs, and communications to advance a coherent and trusted approach to child safety. Comfort navigating complexity and ambiguity is essential.</p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Develop and maintain child-safety policy frameworks that govern how OpenAI products are designed, launched, and operated, including safeguards against exploitation, grooming, harmful content, and other child-related risks.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with Product, User Ops, Legal, Safety, Integrity, Communications, and Global Affairs to align on risk posture, product launches, incident response, and external commitments.</li>\n</ul>\n<ul>\n<li>Translate policy into practice by creating clear implementation standards, enforcement protocols, and escalation paths that engineering, operations, and integrity teams can embed directly into product and trust &amp; safety systems.</li>\n</ul>\n<ul>\n<li>Identify opportunities to leverage data to inform our policy work.</li>\n</ul>\n<p><strong>You might thrive in this 
role if you:</strong></p>\n<ul>\n<li>Have 4+ years of experience in child safety, trust &amp; safety, policy, investigations, intelligence, or a related field, with exposure to how technology platforms manage child-related risks.</li>\n</ul>\n<ul>\n<li>Have deep, specific experience with <em>product policy</em> work, partnering with technical teams to shape responsible product development.</li>\n</ul>\n<ul>\n<li>Possess excellent communication skills with demonstrated ability to communicate with product managers, engineers, researchers, and executives alike</li>\n</ul>\n<ul>\n<li>Are comfortable with ambiguity and enjoy going 0 to 1</li>\n</ul>\n<ul>\n<li>Are a creative thinker with an eye for opportunities to leverage data to inform policies</li>\n</ul>\n<ul>\n<li>Have an understanding of how government, enterprise, and other stakeholders think about child-safety-related policy issues</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c793fc87-c0b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9641aa12-4a70-4b61-9e79-5cb7adcc41f8","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$261K – $290K • Offers Equity","x-skills-required":["child safety","trust & safety","policy","investigations","intelligence","product policy","generative AI","AI technology","consumer and developer products","policy landscape"],"x-skills-preferred":["data analysis","communication","problem-solving","collaboration","leadership"],"datePosted":"2026-03-06T18:37:52.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"child safety, trust & safety, policy, investigations, intelligence, product policy, generative AI, AI technology, consumer and developer products, policy landscape, data analysis, communication, problem-solving, collaboration, leadership","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":261000,"maxValue":290000,"unitText":"YEAR"}}}]}