{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/specialized-detection-tools"},"x-facet":{"type":"skill","slug":"specialized-detection-tools","display":"Specialized detection tools","count":1},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f818897-404"},"title":"Senior Analyst - Safety Operations (CSE)","description":"<p><strong>About the Role</strong></p>\n<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. 
As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n
<li>Process appeals, audit automations, and properly label use cases in the system.</li>\n
<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>\n
<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>\n
<li>Collaborate with team members to provide feedback on tasks that improve the AI&#39;s defenses for detecting illegal and unethical behavior and that align Grok with our rules enforcement.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n
<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiency in enforcement and support, and the ability to propose solutions that increase the security and safety of our platform.</li>\n
<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>\n
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>\n
<li>Ability to interpret and apply xAI safety policies effectively.</li>\n
<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>\n
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>\n
<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>\n
<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>\n
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n
<li>Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.</li>\n
<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>\n
<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>\n</ul>\n
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>",
"url":"https://yubhub.co/jobs/job_2f818897-404","directApply":true,
"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},
"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5097907007","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,
"x-skills-required":["Large Language Models (LLMs)","Child Sexual Abuse Material (CSAM)","Child Sexual Exploitation (CSE)","Online safety","Risk assessment","Ethical reasoning","Data analysis","Automation tools","Social media","Generative AI"],
"x-skills-preferred":["Red-teaming","Adversarial testing","Trust and Safety","Child safety organizations","Specialized detection tools","Classifier development"],
"datePosted":"2026-04-18T15:25:17.446Z",
"jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bastrop, TX"}},
"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology",
"skills":"Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development"}]}