<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>86d4c902-c89</externalid>
      <Title>Safeguards Analyst, Human Exploitation &amp; Abuse</Title>
      <Description><![CDATA[<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>
<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>
<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>
<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>
<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>
<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>
<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>
<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>
<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>
<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>
<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>
<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>
<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>
<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>
<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>
<li>Strong attention to detail and ability to maintain accurate documentation</li>
<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>
<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>
<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>
<li>A deep interest in AI safety and responsible technology development</li>
<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>Annual Salary: $245,000-$285,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156333008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e992e68-7cd</externalid>
      <Title>Staff Engineer, Offensive Security</Title>
      <Description><![CDATA[<p>As a Staff Engineer, Offensive Security at Twilio, you will act as a Technical Lead and design complex attack chains that demonstrate systemic risk. You will spend as much time writing custom code and researching new bypasses as you do executing tests.</p>
<p>In this role, you will:</p>
<ul>
<li>Perform manual and automated testing of web applications, APIs, and mobile apps (iOS/Android)</li>
<li>Conduct network- and cloud-level assessments with various tooling</li>
<li>Triage and validate reports from automated scanners or bug bounty hunters to eliminate false positives and escalate true positives</li>
<li>Perform initial prompt injection and jailbreak tests on AI prototypes, services, and applications using established checklists (OWASP Top 10 for LLMs)</li>
<li>Draft high-quality reports that detail the &quot;path to compromise&quot; with clear, reproducible steps for developers</li>
<li>Manage and update the team&#39;s testing infrastructure (e.g., Burp Suite and basic C2 listeners)</li>
<li>Provide direct technical guidance to engineering teams on how to patch vulnerabilities like XSS, SQLi, and IDOR</li>
<li>Design and lead multi-week Red Team operations that mimic specific threat actors (APTs) to test SIRT detection capabilities</li>
<li>Build custom payloads, droppers, and obfuscated scripts to bypass EDR/AV and maintain stealth</li>
<li>Build automated testing frameworks for AI systems (e.g., using PyRIT, Promptfoo, or Garak) to test models for sensitive data leakage</li>
<li>Execute sophisticated attacks against AWS/Azure/K8s, focusing on IAM misconfigurations and container escapes</li>
<li>Collaborate with SIRT and Detection Engineering to tune SIEM alerts based on the techniques used during an engagement</li>
<li>Oversee the organization&#39;s bug bounty program, identifying trends in submissions to suggest broad architectural security changes</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Offensive security, Penetration testing, Bug bounty, AppSec, Vulnerability exploitation, MITRE ATT&amp;CK matrix, OWASP Top 10 for web applications, OWASP Top 10 for LLMs, Post exploitation, Adversarial ML, Burp Suite professional, Nmap, Metasploit, Wireshark, LangChain, TensorFlow, C2 frameworks, Python, Bash, C++, Telecom expertise, Excellent written and verbal communication skills, Ability to influence and build effective working relationships with all levels of the organization, Proficiency in multiple languages applicable to the region</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7622285</Applyto>
      <Location>Remote - Ireland</Location>
      <Country>Ireland</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>28f97bd7-3d7</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
<Description><![CDATA[<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One such risk is the potential for LLMs to automate attacks that today are carried out by skilled human cybercrime groups, allowing adversaries who misuse LLMs to cause the same harm with far less effort.</p>
<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>We are hiring security specialists who are experienced at exploitation and remediation, and are interested in understanding how LLMs could cause harm in the future, so that we can better prepare for this future and mitigate these risks before they arise.</p>
<p>Responsibilities:</p>
<ul>
<li>Triage any vulnerabilities discovered, coordinate and assist the external and open-source community in remediation</li>
<li>Write scaffolds designed to automate typical traditional attack techniques to help clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>3+ years of experience with pentesting, vulnerability research, or other offensive security work</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A demonstrated willingness to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f578503a-af9</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>
<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need to have a proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>
<p>You will also have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, bringing strong skills in ethical reasoning and risk assessment and the ability to draw on resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>
<p>In addition, you will bring strong communication, interpersonal, analytical, and ethical decision-making skills; a commitment to continuous improvement of processes that prioritize safety and risk mitigation; and expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>
<p>Preferred qualifications include experience working in Trust and Safety at a social media company leveraging AI or other automation tools; experience collaborating with child safety organizations (such as NCMEC) and using specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms; and expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$43.75 - $62.50 USD hourly</Salaryrange>
      <Skills>Improving Large Language Models (LLMs), Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), Online safety and reducing harm, Ethical reasoning and risk assessment, Data analysis, Experience working in a Trust and Safety for a social media company, Collaborating with child safety organizations, Red-teaming and adversarial testing of Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097904007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f818897-404</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Process appeals, audit automations, and properly label use cases in the system.</li>
<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support, along with the ability to propose solutions that increase the security and safety of our platform.</li>
<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret and apply xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>
<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.</li>
<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>
<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>
</ul>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097907007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4eae45a-3da</externalid>
      <Title>Abuse Investigator - Child Safety</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.</p>
<p>The Intelligence and Investigations team supports this mission by identifying, analyzing, and investigating misuse of our products, particularly novel or emerging abuse patterns. Our work enables partner teams to develop data-backed product policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, legitimate applications.</p>
<p><strong>About the Role</strong></p>
<p>As a Child Safety Investigator on the Intelligence &amp; Investigations team, you will identify and disrupt actors attempting to use OpenAI’s products to sexually exploit minors both online and in the real world. OpenAI maintains strict prohibitions in this area and reports apparent CSAM and other credible child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), consistent with applicable law and our policies.</p>
<p>This role requires domain-specific expertise, technical fluency, and the ability to operate in ambiguous, high-impact situations. You will conduct in-depth investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions — including escalations requiring legal review and external reporting.</p>
<p>You will also help develop detection strategies that proactively surface high-risk behavior, especially cases that evade existing safeguards. This role includes responding to time-sensitive escalations. Investigations may involve exposure to sensitive and disturbing material, including sexual or violent content.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Investigate high-severity child safety violations and disrupt malicious actors in partnership with Policy, Legal, Integrity, Global Affairs, Security, and Engineering teams, including through cross-platform and cross-internet research</li>
<li>Support investigations across other high-risk harm areas where child safety concerns intersect</li>
<li>Conduct open-source and cross-platform research to contextualize actors and abuse networks</li>
<li>Develop detection signals, behavioral heuristics, and tracking strategies to proactively identify high-risk users using tools such as SQL, Databricks, and Python</li>
<li>Communicate investigation findings clearly and effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries</li>
<li>Develop a deep, working understanding of OpenAI’s products, internal data systems, and enforcement mechanisms</li>
<li>Collaborate with engineering and data partners to improve investigative tooling, data quality, and analyst workflows</li>
<li>Support time-sensitive escalations and high-priority investigations requiring rapid analysis and sound judgment</li>
<li>Represent investigative findings and work externally with the press, governments, NGOs, and law enforcement agencies</li>
<li>Participate in a rotating on-call schedule to support timely response to high-priority safety incidents and sensitive investigations</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have deep expertise in online child safety and child exploitation threats</li>
<li>Are familiar or proficient with technical investigations, especially using SQL, Python, notebooks, and scripts in a government, law enforcement, and/or tech-company setting</li>
<li>Speak one or more languages in addition to English</li>
<li>Have 5+ years of experience tracking threat actors in abuse domains</li>
<li>Have worked on time-sensitive escalations involving high-risk harm</li>
<li>Have presented analytic findings to senior stakeholders or external partners</li>
<li>Have experience scaling and automating processes, especially with language models</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>Hybrid</Workarrangement>
      <Salaryrange>$158.4K – $425K</Salaryrange>
      <Skills>SQL, Python, Databricks, Notebooks, Scripts, Language models, Technical investigations, Child safety and child exploitation threats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/19b9af1a-6a6e-42e3-824b-a9f3794fef2b</Applyto>
      <Location>San Francisco; Remote - US</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>0ef383eb-d73</externalid>
      <Title>Abuse Investigator (CBRN)</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iterative updates based on what we learn.</p>
<p>The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.</p>
<p><strong>About the Role</strong></p>
<p>As an Abuse Investigator on the Intelligence and Investigations team, you will be responsible for detecting misuse of our platform or services. Specifically, you will focus on cases where users attempt to use our platform in connection with prohibited activities such as developing or delivering biological and/or chemical threats to harm people, critical resources/infrastructure, or the environment. OpenAI has strict prohibitions and policies in this area, and you will detect, disrupt, and enforce on actors who violate our policies.</p>
<p>This role requires domain-specific expertise, experience investigating sophisticated threats, and the ability to navigate ambiguous signals in a complex and adversarial threat environment.</p>
<p>You will respond to time-sensitive escalations and will be expected to present your investigative work, both in writing and verbally, to key stakeholders across government, industry, and civil society, when required. You will also help inform the company’s evolving threat response and integrity monitoring and mitigation stack, while working closely on individual cases and enforcement assessments.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Detect, investigate, and disrupt the attempted misuse of OpenAI products for the development or dissemination of biological threats, including dual-use misuse and emerging biothreat vectors. You will also be expected to work across related domains (e.g., chemical threats).</li>
<li>Partner closely with teams across Policy, Legal, Integrity, Global Affairs, and Security to conduct robust investigations, including cross-internet and open-source research to trace and understand abuse and ensure OpenAI’s mitigations address evolving needs in the space.</li>
<li>Develop abuse signals and tracking strategies to proactively detect users attempting dual-use or biohazard-related misuse of our platform, and review content for enforcement decisions.</li>
<li>Communicate findings from your investigations with internal stakeholders and leadership and, at times, external partners including regulatory or scientific organizations.</li>
<li>Develop a categorical understanding of our product surfaces in the biosecurity space, and work with teams to improve data visibility and internal tooling.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have industry-leading experience in biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), or related biodefense fields</li>
<li>Have strong familiarity with technical investigations, especially using SQL and Python, in a government/military and/or tech-company setting</li>
<li>Have demonstrated experience in risk mitigation (e.g., adversarial thinking and a record of success in threat mitigation)</li>
<li>Have worked on investigations related to biological threat actors, malicious dual-use exploitation, or responsible innovation in synthetic biology or bioengineering</li>
<li>Have 5+ years of experience tracking misuse and/or abuse in biosecurity or life sciences domains, or equivalent education in these domains</li>
<li>Have at least 2 years of experience developing innovative detection solutions and conducting open-ended research to solve real-world problems</li>
<li>Have experience presenting analytical work in public or policy settings</li>
<li>Have experience scaling and automating processes, especially with language models</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>Remote</Workarrangement>
      <Salaryrange>$230.4K – $425K</Salaryrange>
      <Skills>biosecurity, biological weapons non-proliferation, dual-use research of concern (DURC), biodefense, SQL, Python, risk-mitigation, adversarial thinking, threat mitigation, biological threat actors, malicious dual-use exploitation, responsible innovation in synthetic biology or bioengineering, misuse and/or abuse in biosecurity or life sciences domains, innovative detection solutions, open-ended research, analytical work in public or policy settings, scaling and automating processes, language models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a privately held company.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/5d618f84-fcce-496c-bfe9-995bd9ff9065</Applyto>
      <Location>Remote - US; San Francisco; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b0cdccea-4ed</externalid>
      <Title>Offensive Security Research Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for vulnerability researchers to help mitigate the risks that come with building AI systems. One of these risks is that LLMs could enable adversaries to automate attacks that today are carried out by human cybercrime groups. We are hiring security specialists who are experienced in exploitation and remediation and interested in understanding how LLMs could cause harm, so that we can prepare for this future and mitigate these risks before they arise.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Triage any vulnerabilities discovered, coordinate and assist the external and open-source community in remediation</li>
<li>Write scaffolds designed to automate typical traditional attack techniques to help clarify our defensive problem selection</li>
<li>Research how adversaries might misuse LLMs to identify and exploit vulnerabilities at scale in the future</li>
<li>Develop promising defensive strategies that could mitigate the ability of adversaries to misuse models in harmful ways</li>
<li>Work with a small, senior team of engineers and researchers to enact a forward-looking security plan</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>3+ years experience with pentesting, vulnerability research, or other offensive security experience</li>
<li>Senior-level knowledge in at least one related topic area (reverse engineering, network security, exploitation, physical security)</li>
<li>A history demonstrating desire to do the &#39;dirty work&#39; that results in high-quality outputs</li>
<li>Software engineering experience</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Published research papers on computer security, language modeling, or related topics; or given talks at Defcon, Blackhat, CCC, or related venues</li>
<li>Familiarity with large language models and how they work; for example, you may have written agent scaffolds</li>
<li>Reported CVEs, or been awarded for bug bounty vulnerabilities</li>
<li>Contributed to open-source projects in LLM- or security-adjacent repositories</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>pentesting, vulnerability research, offensive security, reverse engineering, network security, exploitation, physical security, software engineering, communication skills, large language models, agent scaffolds, CVEs, bug bounty vulnerabilities, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems. The company is headquartered in San Francisco, CA.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5123011008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>58f5680d-d38</externalid>
      <Title>Technical Delegate (M/F)</Title>
      <Description><![CDATA[<p><strong>TECHNICAL DELEGATE (M/F)</strong></p>
<p>Le Mans</p>
<p><strong>The role</strong></p>
<p>Reporting to the Technical Manager, the <strong>Technical Delegate</strong> will join the ACO's permanent team, which is responsible for drafting and enforcing the technical regulations.</p>
<p>They will work primarily on the <strong>ALMS</strong> and <strong>ELMS</strong> championships and their support championships, and will represent the ACO to manufacturers, competitors, and partner federations.</p>
<p><strong>You will be expected to:</strong></p>
<p><strong>Before events:</strong></p>
<ul>
<li>Independently manage the technical aspects of events organized by the ACO</li>
<li>Train the scrutineering teams in coordination with the Directorate</li>
<li>Prepare technical notes and bulletins</li>
<li>Organize technical inspections and liaise with the officials</li>
</ul>
<p><strong>During events:</strong></p>
<ul>
<li>Act as Technical Delegate, in accordance with the International Sporting Code</li>
<li>Carry out technical inspections (initial, spot, and final checks)</li>
<li>Produce reports and records to professional standards</li>
</ul>
<p><strong>After events:</strong></p>
<ul>
<li>Take part in operational and technical debriefs</li>
<li>Produce detailed reporting</li>
<li>Contribute to the evolution of the technical and sporting regulations (LMP2, LMP3, LMGT3)</li>
</ul>
<p><strong>Cross-functionally:</strong></p>
<ul>
<li>Coordinate the activity of the specialist units (Operations, Performance, Electronics) across the championship scope</li>
</ul>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>13th-month bonus</Salaryrange>
      <Skills>General engineering degree (e.g., mechanical), Successful initial experience in motorsport (engineering, operations, etc.), Stress resistance, diplomacy, rigor, and discretion, Comfortable working remotely, Fluency in French and English, written and spoken, Proficiency with Microsoft Office and collaboration tools (MS Teams, Google Drive, SharePoint, etc.), Availability for around 10 trips per year, including weekends and public holidays, Mastery of the technical regulations, Knowledge of the ALMS and ELMS championships, Experience coordinating sporting events</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Automobile Club de l&apos;Ouest</Employername>
      <Employerlogo>https://logos.yubhub.co/recrutement.lemans.org.png</Employerlogo>
      <Employerdescription>The Automobile Club de l&apos;Ouest (ACO) is a French motorsport organisation that creates and organises the 24 Hours of Le Mans, a legendary endurance racing event held since 1923 on the Circuit de la Sarthe.</Employerdescription>
      <Employerwebsite>https://recrutement.lemans.org</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://recrutement.lemans.org/offer/11284-NDQ0NTMtcm9MNDNw</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>