<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1c4de3ab-a58</externalid>
      <Title>Machine Learning Engineer, Global Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Machine Learning Engineer to bridge the gap between frontier research and real-world impact. As a key member of our GPS Engineering team, you will lead research into Agent design, Deep Research, and AI Safety/reliability, developing novel methodologies that not only power public sector applications but also set new standards across the entire Scale organisation.</p>
<p>Your mission is threefold:</p>
<ul>
<li>Frontier Research &amp; Publication: Leading research into LLM/agent capabilities, reasoning, and safety, with the goal of publishing at top-tier venues (NeurIPS, ICML, ICLR).</li>
<li>Cross-Org Impact: Developing generalised techniques in Agent design, AI Safety and Deep Research agents that scale across our commercial and government platforms.</li>
<li>Mission-Critical Applications: Engineering high-stakes AI systems that impact millions of citizens globally.</li>
</ul>
<p>You will:</p>
<ul>
<li>Pioneer Novel Architectures: Design and train state-of-the-art models and agents, moving beyond “off-the-shelf” solutions to create custom architectures for complex public sector reasoning tasks.</li>
<li>Lead AI Safety Initiatives: Research and implement robust safety frameworks, including red teaming, alignment (RLHF/DPO), and bias mitigation strategies essential for sovereign AI.</li>
<li>Drive Deep Research Capabilities: Develop agents capable of long-horizon reasoning and autonomous information synthesis to solve complex problems for national security and public policy.</li>
<li>Publish and Contribute: Represent Scale in the broader research community by publishing high-impact papers and contributing to open-source breakthroughs.</li>
<li>Consult as a Subject Matter Expert: Act as a technical authority for public sector leaders, advising on the theoretical limits and safety requirements of emerging AI.</li>
<li>Build Evaluation Frontiers: Create new benchmarks and evaluation protocols that define what success looks like for high-stakes, non-commercial AI applications.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Advanced Degree: PhD or Master’s in Computer Science, Mathematics, or a related field with a focus on Deep Learning.</li>
<li>Research Track Record: A portfolio of first-author publications at major conferences (NeurIPS, ICML, CVPR, EMNLP, etc.).</li>
<li>Engineering Rigour: Strong proficiency in Python, deep learning frameworks (PyTorch/JAX), with the ability to write production-ready code that scales.</li>
<li>Safety Expertise: Experience in alignment, robustness, or interpretability research.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with large-scale distributed training on massive clusters.</li>
<li>Experience building reliable agentic systems.</li>
<li>Experience in Sovereign AI or working with highly regulated data environments.</li>
<li>A zero-to-one mindset: Comfortable navigating ambiguity and defining research directions from scratch.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep Learning, PyTorch, JAX, AI Safety, Alignment, Robustness, Interpretability, Large-scale Distributed Training, Agentic Systems, Sovereign AI, Regulated Data Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4413274005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>46488746-59d</externalid>
      <Title>Technical Program Management, Alignment</Title>
      <Description><![CDATA[<p>You will own 3–4 special projects at a time, driving model evaluation projects end-to-end, managing external research collaborators, synthesizing complex information, identifying new projects, and running team offsites.</p>
<p>This role requires 5+ years of experience in chief-of-staff, program management, operations, or similar roles in a research, technical, or fast-moving environment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Scope, plan, and drive model evaluation projects end-to-end</li>
<li>Manage external research collaborators</li>
<li>Synthesize complex information into decision-relevant inputs for leadership</li>
<li>Identify new projects the company should take on</li>
<li>Run the Alignment team offsite and similar events</li>
</ul>
<p>Strong candidates may also have experience working directly with researchers, especially in AI safety or machine learning, and familiarity with the AI safety research landscape, key organizations, and ongoing debates.</p>
<p>Annual compensation range: $210,000-$290,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$210,000-$290,000 USD</Salaryrange>
      <Skills>chief-of-staff experience, program management, operations, model evaluation, research collaboration management, information synthesis, project scoping, event and offsite management, AI safety, machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5187208008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>faee4fa2-887</externalid>
      <Title>Prompt Engineer, Claude Code</Title>
      <Description><![CDATA[<p>As a Prompt Engineer on the Claude Code team, you&#39;ll own Claude&#39;s behaviour within Claude Code, ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product.</p>
<p>This is a highly specialized role sitting at the intersection of model behaviour and product quality. You&#39;ll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you&#39;re the person making sure Claude Code feels right within days, not weeks.</p>
<p>You&#39;ll work closely with Model Quality and Research to understand emergent behaviours and behavioural regressions, and with product and safeguards teams to respond quickly when something goes wrong.</p>
<p>This role requires someone who can move fast on behavioural tuning while maintaining rigor, and who cares deeply about the end-to-end developer experience Claude Code delivers. You&#39;ll need strong prompting skills, excellent judgment about model behaviours, and the collaborative skills to work across product, safeguards, and research teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Own Claude Code&#39;s system prompts for each new model snapshot, ensuring behaviours feel consistent and well-tuned</li>
<li>Review production prompt changes and serve as a resource for particularly challenging prompting problems involving alignment and reputational risks</li>
<li>Lead incident response for behavioural and policy concerns, coordinating with product and safeguards teams</li>
<li>Scale prompting and evaluation best practices across Claude Code and product teams</li>
<li>Deliver product evaluations focused on model behaviours</li>
<li>Define and streamline processes for rolling out prompt changes, including launch criteria and review practices</li>
<li>Create model-specific prompt guides that document quirks and optimal prompting strategies for each release</li>
<li>Collaborate with product teams to translate feature requirements into effective prompts</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Are a power user of agentic coding tools and have strong intuition about model capabilities and limitations</li>
<li>Thrive in high-intensity environments with fast iteration cycles</li>
<li>Take full ownership of problems and drive them to completion independently</li>
<li>Are skilled at creating and maintaining behavioural evaluations</li>
<li>Have strong technical understanding, including comprehension of agent scaffold architectures and model training processes</li>
<li>Are an experienced coder comfortable working in Python and TypeScript</li>
<li>Have independently driven changes through production systems with strong execution and responsiveness</li>
<li>Have experience translating user feedback and product needs into coherent prompts and behavioural specifications</li>
<li>Excel at working across organisational boundaries, collaborating effectively with teams that have differing goals and perspectives</li>
<li>Care deeply about AI safety and making Claude a healthy alternative in the AI landscape</li>
</ul>
<p>Annual compensation range for this role is $300,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>prompt engineering, model behaviour, product quality, agentic coding tools, Python, TypeScript, collaboration, incident response, process definition, evaluation best practices, AI safety, model training processes, agent scaffold architectures, behavioural evaluations, user feedback analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5159669008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f931591c-87a</externalid>
      <Title>Research Scientist, Frontier Risk Evaluations</Title>
      <Description><![CDATA[<p>As a Research Scientist focused on Frontier Risk Evaluations, you will design and create evaluation measures, harnesses and datasets for measuring the risks posed by frontier AI systems.</p>
<p>For example, you might do any or all of the following:</p>
<ul>
<li>Design and build harnesses to test AI models and systems (including agents) for dangerous capabilities such as security vulnerability exploitation, CBRN uplift, and other high-risk activities;</li>
<li>Work with government agencies or other labs to collectively scope and design evaluations to measure and mitigate risks posed by advanced AI systems;</li>
<li>Publish evaluation methodologies and write technical reports for policymakers.</li>
</ul>
<p>We are seeking talented researchers to join us in shaping this vision.</p>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>
<li>Practical experience conducting technical research collaboratively. You should be comfortable building and instrumenting ML pipelines, writing evaluation harnesses, and quickly turning new ideas from the research literature into working prototypes;</li>
<li>A track record of published research in machine learning, particularly in generative AI;</li>
<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in crafting evaluations and benchmarks, or a background in data science roles related to LLM technologies;</li>
<li>Experience with red-teaming or adversarial testing of AI systems;</li>
<li>Familiarity with AI safety policy frameworks (e.g., NIST AI RMF, EU AI Act, Korea AI Basic Act).</li>
</ul>
<p>Our research interviews are crafted to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organisational culture. We will not ask any LeetCode-style questions. If you’re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn’t perfectly align with every requirement.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>machine learning, generative AI, ML pipelines, evaluation harnesses, AI safety policy frameworks, crafting evaluations and benchmarks, data science roles related to LLM technologies, red-teaming or adversarial testing of AI systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4677657005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p>Current Project Areas</p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p>Responsibilities</p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p>Requirements</p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required.</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p>Location</p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve physical secure facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p>Deadline to Apply</p>
<p>None; applications will be received on a rolling basis.</p>
<p>Annual Compensation Range</p>
<p>$405,000 - $485,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, cloud infrastructure (AWS, GCP, Azure), Kubernetes, networking fundamentals, cross-functional execution, written communication, offensive security, red teaming, security research, applied cryptography, ML infrastructure, AI safety</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1da2c245-26b</externalid>
      <Title>Communications Lead, The Anthropic Institute</Title>
      <Description><![CDATA[<p>We are seeking an experienced communications professional to serve as a dedicated communications partner to the Head of Public Benefit. This is a rare opportunity to work alongside an executive leading some of the most consequential research of our time: work on the economic impacts of AI, the implications of self-improving systems, the changing offense-defense balance, and the societal effects of powerful AI on real people and communities.</p>
<p>This role sits at the intersection of executive communications, brand strategy, and portfolio management. You will be the single point of contact and dedicated communications advisor for Jack Clark, managing day-to-day communications requests and strategic engagements across his full portfolio. Working closely with the Institute&#39;s communications, policy, and research teams, you will ensure that the Institute&#39;s work is coordinated, consistent, and lands with the audiences it deserves, far beyond the AI and policy communities.</p>
<p>Key Focus Areas:</p>
<p>Executive Communications and Brand Platform for Jack Clark</p>
<ul>
<li>Serve as the single point of contact and dedicated communications advisor to Jack Clark, Anthropic’s Head of Public Benefit and leader of The Anthropic Institute</li>
<li>Lead the creation and execution of Jack’s personal brand platform, defining how he shows up across media interviews, podcasts, speaking engagements, written commentary, and social media</li>
<li>Assist in preparing Jack for high-profile media engagements and public speaking, developing talking points, briefing materials, and post-engagement analysis</li>
<li>Identify and secure strategic media opportunities, panel placements, and speaking engagements that advance both Jack’s platform and the Institute’s mission</li>
</ul>
<p>Support for Anthropic Institute Communications</p>
<ul>
<li>Work with communications colleagues to develop an overarching communications strategy for The Anthropic Institute, tracking key areas of work across the Economic Index, Societal Impacts Research, and beyond</li>
<li>Work with members of the policy and editorial teams to design the Institute&#39;s publishing and content strategy, ensuring outputs reach the largest possible audience, whether through the Anthropic blog, the Institute&#39;s own site, microsites, interactive experiences, video, audio, or other formats</li>
<li>Translate complex research across economics, societal impacts, frontier red teaming, and AI safety into compelling public narratives that reach audiences beyond the AI and policy communities</li>
<li>Build and maintain relationships with media, researchers, and thought leaders across economics, labor, national security, and general interest outlets, not just the technology press</li>
<li>Lead communications for select Institute publications and projects</li>
<li>Coordinate with Anthropic’s broader communications, editorial, and policy teams to keep messaging aligned while maintaining the Institute’s distinct voice and mission</li>
</ul>
<p>Responsibilities</p>
<ul>
<li>Serve as the single point of contact for Jack Clark across all communications needs, managing intake, prioritization, and day-to-day logistics for his portfolio</li>
<li>Coordinate across the Institute’s communications team, policy team, and editorial team to ensure Jack’s priorities are represented and his voice is consistent across all external moments</li>
<li>Lead the creation and ongoing execution of Jack’s brand platform across all channels</li>
<li>Translate technical research on economics, societal impacts, AI safety, and frontier capabilities into narratives accessible to diverse audiences, in partnership with the Institute’s research communications team</li>
<li>Prepare Jack and other Institute leaders for media interviews, podcast appearances, congressional hearings, and speaking engagements</li>
<li>Build and maintain relationships with media across technology, economics, national security, labor, and general interest verticals</li>
<li>Create high-quality content across formats: blog posts, research summaries, briefing documents, talking points, social content, and innovative digital formats</li>
<li>Build repeatable playbooks and processes that allow a lean team to punch well above its weight</li>
</ul>
<p>You May Be a Good Fit If You</p>
<ul>
<li>Have 10+ years of experience in communications, with significant depth in executive communications, thought leadership, or public interest/policy communications</li>
<li>Have a track record of translating complex, technical, or academic research into public narratives that reach audiences well beyond specialist communities</li>
<li>Can embrace the “weird”: AI is a new field where the unexpected and unusual happen all the time</li>
<li>Have built and executed brand platforms for senior executives, thought leaders, or public intellectuals, and have managed their day-to-day communications logistics</li>
<li>Are equally comfortable crafting high-level messaging strategy and producing content under deadline</li>
<li>Have strong media relationships across a range of verticals: technology, economics, national security, policy, and/or general interest</li>
<li>Can move between strategic messaging and fast-turnaround tactical execution without losing quality</li>
<li>Are intellectually curious about AI’s impact on the economy, society, national security, and the future of work, and can engage substantively with researchers working on these problems</li>
<li>Are excited about using AI tools in your own workflows to multiply what a small team can do</li>
<li>Care deeply about AI safety, responsible technology development, and Anthropic’s public benefit mission</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$320,000 USD</Salaryrange>
      <Skills>communications, executive communications, brand strategy, portfolio management, media relations, public speaking, content creation, research translation, AI safety, responsible technology development, AI tools, data analysis, machine learning, natural language processing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>The Anthropic Institute</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>The Anthropic Institute is a public benefit corporation building some of the world&apos;s most powerful artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155269008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a1ecaac-284</externalid>
      <Title>Global Leader, Applied AI Architects, Beneficial Deployments</Title>
      <Description><![CDATA[<p>As the Global Leader of Applied AI Architects for Beneficial Deployments, you will lead a team of Applied AI Architects who serve as the primary technical partners to mission-driven organisations and non-profits adopting Claude. You&#39;ll build and scale a world-class, globally distributed team that turns frontier AI into real impact in education, global health, economic mobility, and life sciences.</p>
<p>You&#39;ll combine deep technical fluency with the leadership judgment needed to operate across segments, regions, and partner types, from global health foundations to leading research institutions to frontline non-profits. You&#39;ll set the vision for how we scale our expertise from a handful of flagship partnerships to an ecosystem of organisations operating as AI-native, and you&#39;ll be accountable for the team, processes, and cross-functional relationships that make that possible.</p>
<p>In collaboration with Beneficial Deployment’s Head of Nonprofits, Product, Engineering, Policy, and our broader GTM organisation, you&#39;ll help ensure our partners incorporate Claude into their work responsibly, effectively, and in ways that meaningfully accelerate their missions. You&#39;ll represent Anthropic as a senior technical leader on some of our most visible and consequential partnerships, while maintaining our best-in-class safety standards.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead, grow, and mentor a globally distributed team of Architects supporting mission-driven non-profits across education, global health, economic mobility, and life sciences</li>
<li>Set the vision, strategy, and operating model for how Applied AI shows up in Beneficial Deployments, from discovery through deployment, and from individual partnerships to ecosystem-wide infrastructure</li>
<li>Establish hiring plans, team structure, and career development paths as we scale the team globally; set goals and reviews that promote growth, output, and a high bar for technical excellence</li>
<li>Partner closely with segment leads and senior partner leadership to understand requirements and shape engagements on our highest-impact partnerships</li>
<li>Drive the design of cohort-based accelerators, Claude Code enablement programmes, and other scalable mechanisms that multiply our impact across many organisations simultaneously</li>
<li>Identify patterns across partners and segments to inform what we build at the ecosystem level: MCPs, evals, reference implementations, and shared infrastructure</li>
<li>Collaborate with Product and Engineering to surface partner needs, influence roadmap, and ensure learnings from the field shape how Claude evolves</li>
<li>Represent Anthropic externally with senior leaders at foundations, non-profits, research institutions, and government-adjacent organisations</li>
<li>Travel to partner sites globally for workshops, technical deep dives, and relationship building</li>
<li>Help shape team processes and culture as Beneficial Deployments scales, and contribute to the broader Applied AI leadership community at Anthropic</li>
<li>Travel is 30-40% due to the global nature of the team (SF, NYC, London and Bengaluru) and events across Beneficial Deployments</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>10+ years of experience in technical, customer-facing roles (Solutions Architect, Forward Deployed Engineer, Customer Engineer, Sales Engineer, or similar), with meaningful exposure to complex, high-stakes deployments</li>
<li>7+ years of engineering or technical leadership experience, preferably building and scaling customer-facing or forward-deployed teams globally</li>
<li>Experience working with or inside mission-driven organisations (education, healthcare, scientific research, global development, or non-profits) and a genuine understanding of the constraints, incentives, and operating realities of these sectors</li>
<li>Familiarity with common LLM implementation patterns, including prompt engineering, evaluation frameworks, agent frameworks, and retrieval systems; working knowledge of Python</li>
<li>A track record of building teams in ambiguous, fast-moving environments, and comfort wearing multiple hats as the team scales</li>
<li>Strong executive presence and the ability to foster deep, trusted relationships with senior partner leadership</li>
<li>Excellent communication, collaboration, and coaching abilities, with a love of teaching and helping others succeed</li>
<li>The ability to think holistically, identify core principles that translate across scenarios, and make ambiguous problems clear</li>
<li>A passion for making powerful technology safe and societally beneficial, and for thinking creatively about risks and benefits beyond existing playbooks</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience leading globally distributed teams across time zones and regions</li>
<li>Background in philanthropy, global health, education technology, or scientific research</li>
<li>Experience designing cohort-based or programmatic delivery models that scale technical expertise across many organisations</li>
<li>A working understanding of emerging research in agents, evaluations, and AI safety</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$315,000-$380,000 USD</Salaryrange>
      <Skills>Technical leadership, Team management, Communication, Collaboration, Coaching, Python, LLM implementation patterns, Prompt engineering, Evaluation frameworks, Agent frameworks, Retrieval systems, Experience leading globally distributed teams, Background in philanthropy, global health, education technology, or scientific research, Experience designing cohort-based or programmatic delivery models, Working understanding of emerging research in agents, evaluations, and AI safety</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5192104008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd02d1a1-0e8</externalid>
      <Title>Communications Lead, Claude Code</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Communications Lead to own comms for Claude Code. You&#39;ll sit on the Product Communications team, working day-to-day with the Claude Code product team, developer relations, and marketing.</p>
<p>The media landscape for developer tools doesn&#39;t look like it did five years ago. We need someone who understands both traditional press and the channels where developers form opinions. You might have come up through an in-house comms team, or you might have run launches inside product marketing, handled press from a DevRel role, or found your way to this work from somewhere adjacent.</p>
<p>You should be a Claude Code user yourself and know the product well.</p>
<p>Responsibilities:</p>
<ul>
<li>Own communications for Claude Code, from the big launches to the steady rhythm of updates, community moments, and everything in between</li>
<li>Build and maintain strong relationships with journalists, newsletter writers, podcasters, and creators covering dev tools and the AI ecosystem</li>
<li>Lead cross-functional product launch communications, coordinating messaging across comms, marketing, developer relations, and product</li>
<li>Advise leadership and DevRel when things move fast or catch fire, whether it’s an incident or a community thread</li>
<li>Translate complex technical work into stories that land with developers and still make sense to broader audiences</li>
<li>Develop messaging frameworks and content strategies that work across technical and non-technical audiences</li>
<li>Prepare Claude Code engineers and product leads for external moments: podcasts, talks, press, etc.</li>
<li>Think across channels (press, social, community, owned) and know which lever to pull for each moment</li>
<li>Pay attention to what&#39;s actually working and build the program from there</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8–12 years of experience in communications, PR, or developer marketing, with meaningful time focused on technical products or developer audiences</li>
<li>Use Claude Code heavily and can talk specifically about how you use it in your day-to-day</li>
<li>Are high-agency and low-ego, with a bias to action</li>
<li>Write clearly and concisely, whether it&#39;s a launch post or a cross-functional update; a lot of context moves through this role and people need to be able to follow it</li>
<li>Have a deep understanding of both traditional media channels and the emerging platforms where technical communities engage</li>
<li>Are very online, follow the right people, know what&#39;s moving through Hacker News and developer social chatter, and catch things early</li>
<li>Have real fluency in developer culture and know how trust gets earned there</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience at developer tools companies, infrastructure products, or open source projects</li>
<li>Have an existing network in developer media, technical journalism, or the creator space</li>
<li>Have experience managing communications for AI or ML products</li>
</ul>
<p>The annual compensation range for this role is $185,000-$255,000 USD.</p>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000-$255,000 USD</Salaryrange>
      <Skills>communications, PR, developer marketing, technical products, developer audiences, AI, ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153586008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fab21c7e-6bf</externalid>
      <Title>Research Engineer / Scientist, Alignment Science - London</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>You will contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems. As a Research Engineer on Alignment Science, you&#39;ll work on creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct research on AI control and alignment stress-testing</li>
<li>Develop and implement new techniques for ensuring AI safety</li>
<li>Collaborate with other teams, including Interpretability, Fine-Tuning, and the Frontier Red Team</li>
<li>Test and evaluate the effectiveness of AI safety techniques</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software, ML, or research engineering experience</li>
<li>Familiarity with technical AI safety research</li>
<li>Experience contributing to empirical AI research projects</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience authoring research papers in machine learning, NLP, or AI safety</li>
<li>Experience with LLMs</li>
<li>Experience with reinforcement learning</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
</ul>
<p>Note:</p>
<p>This role requires candidates to be based in London at least 25% of the time and to travel to San Francisco occasionally.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£260,000-£370,000 GBP</Salaryrange>
      <Skills>software engineering, machine learning, research engineering, AI safety, technical AI safety research, research paper authoring, LLMs, reinforcement learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4610158008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>536aa8eb-f7c</externalid>
      <Title>Technical Influence Operations Threat Investigator</Title>
      <Description><![CDATA[<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation.</p>
<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>
<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>
<p>Responsibilities:</p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behavior, astroturfing, and narrative manipulation campaigns</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover coordinated networks of threat actors conducting influence operations</li>
<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioral clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>
<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>
<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>
<li>Monitor and analyze state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>
<li>Forecast how advances in AI technology (including improved content generation, voice synthesis, and multimodal capabilities) will reshape the influence operations landscape and inform safety-by-design strategies</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>
<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>
<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>
<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>
<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>
<li>Familiarity with social network analysis techniques and tools for mapping coordinated behavior</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$230,000-$290,000 USD</Salaryrange>
      <Skills>Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare, Proficiency in SQL and Python for data analysis and threat detection, Experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations, Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations, Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems, Experience at a major technology platform working on influence operations, platform integrity, or content authenticity, Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts, Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions, Familiarity with social network analysis techniques and tools for mapping coordinated behavior, Background in AI safety, machine learning security, or technology abuse investigation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5140239008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>051843ef-f93</externalid>
      <Title>Vendor and Contract Manager, Safeguards</Title>
      <Description><![CDATA[<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships. This includes identifying and selecting vendors, contract negotiation, onboarding, ongoing performance management, and renewal.</p>
<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>
<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>
<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue, all in one day.</p>
<p>Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>
<p>Responsibilities:</p>
<p>Vendor Selection &amp; Onboarding</p>
<ul>
<li>Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>
<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>
<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>
</ul>
<p>Contract &amp; Budget Management</p>
<ul>
<li>Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>
<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>
<li>Handle invoicing, payment, and renewal processes with partners</li>
<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>
</ul>
<p>Ongoing Vendor &amp; Partner Management</p>
<ul>
<li>Manage vendor and researcher access to models and products during testing phases and trials</li>
<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>
<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>
<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>
<li>A track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</li>
<li>An understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>
<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>
<li>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>
<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>
<li>Familiarity with procurement tools such as Ironclad or Zip</li>
</ul>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, verification vendors, threat intelligence providers, content screening tools, Ironclad, Zip, research collaboration, civil society consultation, model red-teaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156596008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da06ef8d-890</externalid>
      <Title>Vendor and Contract Manager, Safeguards</Title>
<Description><![CDATA[<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships, from identifying and selecting vendors through contract negotiation, onboarding, ongoing performance management, and renewal.</p>
<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>
<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>
<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue, all in the same day.</p>
<p><em>Important context for this role:</em> In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Vendor Selection &amp; Onboarding: Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>
<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>
<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>
<li>Contract &amp; Budget Management: Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>
<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>
<li>Handle invoicing, payment, and renewal processes with partners</li>
<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>
<li>Ongoing Vendor &amp; Partner Management: Manage vendor and researcher access to models and products during testing phases and trials</li>
<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>
<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>
<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>
<li>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</li>
<li>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>
<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>
<li>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>
<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>
<li>Familiarity with procurement tools such as Ironclad or Zip</li>
</ul>
<p><strong>Logistics:</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, cross-functional collaboration, ambiguity tolerance, fast-paced environments, AI safety vendor ecosystems, procurement tools, Ironclad, Zip</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156596008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a922c6ae-3c1</externalid>
<Title>Technical CBRN-E Threat Investigator</Title>
      <Description><![CDATA[<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>
<p>You will work at the intersection of AI safety and CBRN security, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against threat actors who may attempt to leverage our AI technology for developing weapons, synthesizing dangerous compounds, or creating biological harm.</p>
<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>
<p>Responsibilities:</p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>
<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>
<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>
<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>An advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or a related field</li>
<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>
<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Familiarity with synthetic biology, biotechnology, or dual-use research</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>An active Top Secret security clearance</li>
</ul>
<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230,000-$290,000 USD</Salaryrange>
      <Skills>SQL, Python, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, threat actor profiling, threat intelligence frameworks, large language models, AI technology misuse, advanced degree in biological sciences, chemistry, biodefense, biosecurity, or related field, real-world experience countering weapons of mass destruction or other high-risk asymmetric threats, experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information, background in AI safety, machine learning security, or technology abuse investigation, familiarity with synthetic biology, biotechnology, or dual-use research, experience building and scaling threat detection systems or abuse monitoring programs, active Top Secret security clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066997008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86d4c902-c89</externalid>
      <Title>Safeguards Analyst, Human Exploitation &amp; Abuse</Title>
      <Description><![CDATA[<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>
<p>You will be a member of the user well-being team, with an initial focus on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>
<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>
<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>
<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>
<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>
<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>
<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>
<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>
<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>
<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>
<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>
<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>
<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>
<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>
<li>Strong attention to detail and ability to maintain accurate documentation</li>
<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>
</ul>
</ul>
<p>Preferred:</p>
<ul>
<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>
<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>
<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>
<li>A deep interest in AI safety and responsible technology development</li>
<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>
</ul>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis tools, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, non-consensual intimate imagery, commercial sexual exploitation, NGO and industry ecosystem working on these harms, open-source investigations or threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156333008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d742066-b4f</externalid>
      <Title>Anthropic Fellows Program — AI Safety</Title>
      <Description><![CDATA[<p>The Anthropic Fellows Program is a 4-month full-time research opportunity designed to foster AI research and engineering talent. As a fellow, you will work on an empirical project aligned with our research priorities, with the goal of producing a public output. You will have direct mentorship from Anthropic researchers, access to a shared workspace, and connection to the broader AI safety and security research community. The expected base stipend for this role is $3,850 USD per week, with an expectation of 40 hours per week for 4 months.</p>
<p>The program is open to individuals with a strong technical background in computer science, mathematics, or physics, and who are motivated by making sure AI is safe and beneficial for society as a whole. You will be part of a diverse team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>As a fellow, you will have the opportunity to work on projects in select AI safety research areas, such as scalable oversight, adversarial robustness and AI control, model organisms, model internals/mechanistic interpretability, and AI welfare. You will also have access to our Alignment Science and Frontier Red Team blogs, which feature past projects and research directions.</p>
<p>To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program. We are not currently able to sponsor visas for fellows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$3,850 USD per week</Salaryrange>
<Skills>Fluent in Python, Strong technical background in computer science, mathematics, or physics, Experience in areas of research or engineering related to AI safety, Experience working with large language models, Track record of open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5183044008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ee086b6f-d4e</externalid>
      <Title>Capital Markets &amp; Investor Relations</Title>
      <Description><![CDATA[<p>As a key member of the Capital Markets &amp; Corporate Development team at Anthropic, you&#39;ll play a central role in shaping our financial strategy and capital structure during a critical period of growth.</p>
<p>You&#39;ll lead capital raising initiatives, manage investor relationships, and help prepare Anthropic for the next phase of its evolution as a company. Working closely with our leadership team, you&#39;ll help ensure Anthropic has the financial resources and strategic partnerships needed to fulfill our mission of building reliable, interpretable, and steerable AI systems.</p>
<p>In this role, you&#39;ll leverage your expertise across capital markets and financial strategy to drive fundraising activities, build robust investor relations frameworks, and lay the groundwork for long-term financial flexibility. You&#39;ll also support selective corporate development opportunities that align with our strategic priorities.</p>
<p>The ideal candidate brings deep capital markets experience, strong analytical capabilities, and exceptional relationship-building skills to help guide Anthropic through its next phase of growth while maintaining our commitment to responsible AI development.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead capital raising processes, working with executive leadership to determine timing, structure, and terms for potential financing rounds</li>
<li>Build and maintain relationships with existing and potential investors across institutional, strategic, and financial investor bases</li>
<li>Develop comprehensive investor relations strategies, including communications, reporting frameworks, and engagement plans</li>
<li>Help build financial infrastructure and reporting capabilities to support institutional-grade transparency and governance</li>
<li>Track and analyze market conditions, comparable transactions, and valuation benchmarks to inform capital strategy</li>
<li>Identify and evaluate strategic investment opportunities and M&amp;A transactions aligned with Anthropic&#39;s mission</li>
<li>Create detailed financial models, valuation analyses, and market research to support strategic decision-making</li>
<li>Prepare and present recommendations to leadership and the board on capital structure and financing strategies</li>
<li>Collaborate with Finance, Legal, and Comms teams to align financial and strategic initiatives with organizational priorities</li>
</ul>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in investment banking, equity capital markets, private equity, venture capital, or similar roles with significant capital markets exposure</li>
<li>Possess deep knowledge of capital markets, financial instruments, transaction structures, and institutional investor perspectives</li>
<li>Have a proven track record of successfully executing capital raises or advising on financing transactions</li>
<li>Demonstrate exceptional financial modeling and analytical capabilities</li>
<li>Are a strategic thinker who can connect financial decisions to long-term organizational goals</li>
<li>Have excellent communication skills and can effectively engage with diverse stakeholders including investors, executives, and technical teams</li>
<li>Thrive in fast-paced environments and can manage multiple complex projects simultaneously</li>
<li>Show sound judgment when evaluating risks and opportunities in ambiguous situations</li>
<li>Are passionate about AI safety and align with Anthropic&#39;s mission to develop AI systems that are reliable, interpretable, and steerable</li>
</ul>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience in technology or AI-related industries</li>
<li>Bring experience from companies that have scaled through major growth transitions or prepared for significant capital markets events</li>
<li>Possess advanced degrees in finance, business, or related fields</li>
<li>Have worked with both private and public companies, understanding the requirements and expectations at different stages</li>
<li>Demonstrate knowledge of AI research and development landscapes</li>
<li>Show intellectual curiosity about the technical aspects of AI safety and alignment</li>
<li>Have a strong professional network in relevant investment communities</li>
<li>Bring experience working in high-growth, mission-driven organizations</li>
</ul>
<p>The annual compensation range for this role is $250,000-$310,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$250,000-$310,000 USD</Salaryrange>
      <Skills>Investment banking, Equity capital markets, Private equity, Venture capital, Financial modeling, Analytical capabilities, Relationship-building skills, Capital raising, Investor relations, Financial infrastructure, Reporting capabilities, Market analysis, Valuation benchmarks, Strategic investment opportunities, M&amp;A transactions, Financial models, Valuation analyses, Market research, AI safety, Responsible AI development, High-growth organizations, Mission-driven organizations, Technology or AI-related industries, Advanced degrees in finance, business, or related fields, Strong professional network in relevant investment communities</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>250000</Compensationmin>
      <Compensationmax>310000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5116167008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e48ec86-b97</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p><strong>Current Project Areas</strong></p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Ambiguity and iteration, Genuine curiosity, Passion for AI safety, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Secure enclaves, TPMs, Confidential computing primitives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e03253e3-c7f</externalid>
      <Title>Safeguards Analyst, Human Exploitation &amp; Abuse</Title>
      <Description><![CDATA[<p>As a Safeguards Analyst focusing on human exploitation and abuse, you will be responsible for building and executing enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.</p>
<p>You will be a member of the user well-being team, and your initial focus will be on standing up detection, review, and escalation workflows for this domain, from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.</p>
<p>This position may later expand to include broader areas of user well-being enforcement. Safety is core to our mission, and you&#39;ll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.</p>
<p>In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy</li>
<li>Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas</li>
<li>Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces</li>
<li>Conduct deep-dive investigations into suspected exploitation activity, using SQL and other data analysis tools to surface threat patterns and bad-actor behavior in large datasets, then produce clear, well-sourced intelligence reports that inform detection strategy and surface policy gaps to the Safeguards policy design team</li>
<li>Study trends internally and in the broader ecosystem, including evolving trafficking and sextortion tactics, to anticipate how AI systems could be misused for exploitation as capabilities advance</li>
<li>Review and investigate flagged content to drive enforcement decisions and policy improvements, exercising careful judgment on the line between permitted adult content and exploitative material</li>
<li>Build and maintain relationships with external intelligence partners, including hotlines, NGOs, and industry hash-sharing consortia, to inform our approach and enable appropriate real-world escalation</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of experience in trust and safety, content moderation, counter-exploitation work, or a related field</li>
<li>Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation</li>
<li>Experience building or operating detection and review workflows for sensitive content, at a platform, NGO, hotline, or similar organization</li>
<li>Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations</li>
<li>Demonstrated ability to analyze complex situations and make well-reasoned decisions under pressure</li>
<li>Sound judgment in distinguishing permitted content from exploitative content, and comfort working in areas where these lines require careful reasoning</li>
<li>Strong attention to detail and ability to maintain accurate documentation</li>
<li>Ability to collaborate with team members while navigating rapidly evolving priorities and workstreams</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)</li>
<li>Experience conducting open-source investigations or threat actor profiling in a trust &amp; safety, intelligence, or law enforcement context</li>
<li>Experience working with generative AI products, including writing effective prompts for content review and enforcement</li>
<li>A deep interest in AI safety and responsible technology development</li>
<li>Experience standing up real-world harm escalation pathways or working with law enforcement referral processes</li>
</ul>
<p><strong>Compensation:</strong></p>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>trust and safety, content moderation, counter-exploitation work, SQL, Python, data analysis, detection and review workflows, sensitive content, human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse, commercial sexual exploitation, NGO and industry ecosystem, open-source investigations, threat actor profiling, generative AI products, AI safety and responsible technology development, real-world harm escalation pathways</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156333008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0287c3-e30</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are developing novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, performance in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams.</p>
<p>This work sits at the intersection of AI capabilities research, cybersecurity, and policy; what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats. This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions and workshops.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, software engineering, Python, AI safety, threat modeling, offensive security research, vulnerability research, exploit development, research or professional experience applying LLMs to security problems, competitive CTFs, bug bounties, security tools or automation, demos or prototypes, external stakeholders, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c5ac3a-e98</externalid>
      <Title>Research Engineer / Scientist, Alignment Science</Title>
      <Description><![CDATA[<p>You will contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems. Your work will involve building and running elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems.</p>
<p>As a Research Engineer on Alignment Science, you&#39;ll collaborate with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team. Your responsibilities will include testing the robustness of our safety techniques, running multi-agent reinforcement learning experiments, building tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks, and contributing ideas, figures, and writing to research papers, blog posts, and talks.</p>
<p>You may be a good fit if you have significant software, ML, or research engineering experience, have some experience contributing to empirical AI research projects, and have some familiarity with technical AI safety research. Strong candidates may also have experience authoring research papers in machine learning, NLP, or AI safety, have experience with LLMs, have experience with reinforcement learning, and have experience with Kubernetes clusters and complex shared codebases.</p>
<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>machine learning, research engineering, AI safety, Python, Kubernetes, LLMs, reinforcement learning, authoring research papers, NLP, AI safety research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>500000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4631822008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e850d882-42f</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p>As a Research Engineer on our Post-Training team, you&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies.</p>
<p>You&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>We conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Thrive in controlled chaos and are energized, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems. Proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>
<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, ML systems, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, LLMs, AI safety and responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>500000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4613592008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f5d92fd6-e21</externalid>
      <Title>Prompt Engineer, Agent Prompts &amp; Evals</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re looking for prompt and context engineers to join our product engineering team to help build AI-first products, features, and evaluations. Your mission will be to bridge the gap between model capabilities and real product experience, working with product teams to build consistent, safe, and beneficial user experiences across all product surfaces.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, test, and optimize system prompts and feature-specific prompts that shape Claude&#39;s behavior across consumer and API products.</li>
<li>Build and maintain comprehensive evaluation suites that ensure model quality and consistency across product launches and updates.</li>
<li>Partner closely with product teams, research teams, and safeguards to ensure new features meet quality and safety standards.</li>
<li>Play a critical role in model releases, ensuring smooth rollouts and catching regressions before they impact users.</li>
<li>Help build and improve the frameworks and tools that allow teams to develop and test prompts and features with confidence.</li>
<li>Mentor product engineers on prompt engineering best practices and help teams build their first evaluations.</li>
<li>Work in a fast-paced environment where model capabilities advance daily, requiring quick adaptation and creative problem-solving.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>5+ years of software engineering experience with Python or similar languages.</li>
<li>Demonstrated experience with LLMs and prompt engineering (through work, research, or significant personal projects).</li>
<li>Strong understanding of evaluation methodologies and metrics for AI systems.</li>
<li>Excellent written and verbal communication skills – you&#39;ll need to explain complex model behaviors to diverse stakeholders.</li>
<li>Ability to manage multiple concurrent projects and prioritize effectively.</li>
<li>Experience with version control, CI/CD, and modern software development practices.</li>
</ul>
<p><strong>You Might Thrive in This Role If You…</strong></p>
<ul>
<li>Get excited about the nuances of how language models behave and love finding creative ways to improve their outputs.</li>
<li>Enjoy being at the intersection of research and product, translating cutting-edge capabilities into user value.</li>
<li>Are comfortable with ambiguity and can define success metrics for novel AI features.</li>
<li>Have a strong sense of ownership and drive projects from conception to production.</li>
<li>Are passionate about building AI systems that are helpful, harmless, and honest.</li>
<li>Thrive in collaborative environments and enjoy teaching others.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Python, LLMs, Prompt engineering, Evaluation methodologies, Metrics for AI systems, Version control, CI/CD, Modern software development practices, Claude, A/B testing, Experimentation frameworks, AI safety, Alignment considerations, Building tools and infrastructure for ML/AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107121008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fdad2d9-8f4</externalid>
      <Title>Member of Technical Staff - International Government</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We&#39;re looking for a highly skilled Member of Technical Staff to join our team at xAI. As a key member of our team, you will design, build, and optimize integrations between xAI&#39;s frontier models and international government systems, platforms, and data environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and optimize integrations between xAI&#39;s frontier models and international government systems, platforms, and data environments</li>
<li>Develop secure, scalable solutions for use cases such as policy analysis, edtech, scientific research support, public health modeling, regulatory workflows, and citizen-facing services across diverse global contexts</li>
<li>Collaborate on custom SDKs, APIs, developer tools, and documentation tailored for international government and enterprise developers</li>
<li>Partner with international agency stakeholders to understand requirements, prototype solutions, and iterate rapidly based on real-world feedback, including during on-site assignments</li>
<li>Contribute to safe deployment practices, including red-teaming, bias evaluation, output filtering, and explainability features for high-stakes non-classified applications in varied regulatory landscapes</li>
<li>Fine-tune and adapt xAI models for specific international government use cases, incorporating custom guardrails and evaluation frameworks to ensure alignment with mission objectives and ethical guidelines</li>
<li>Ship production-grade code and features with a bias toward speed, simplicity, and measurable impact</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>4+ years of hands-on software engineering experience building scalable systems, APIs, or AI/ML applications (strong Python proficiency required; other languages a plus)</li>
<li>Experience fine-tuning AI models for government or mission-critical use cases, including building evaluations and ensuring safety and performance</li>
<li>Experience deploying complex AI and data systems in sovereign environments, ensuring compliance with international regulations for technology and AI in government or public sector settings</li>
<li>Willingness and ability to undertake travel and international assignments to regions such as the Americas, Asia, and the Middle East, and potentially other regions</li>
<li>Strong product sensibility: ability to translate ambiguous stakeholder needs into concrete technical solutions</li>
<li>Demonstrated ability to write clean, maintainable, high-performance code under tight timelines</li>
<li>Exceptional problem-solving skills and intellectual curiosity; you thrive on hard, ambiguous challenges</li>
<li>Excellent communication skills; you can explain complex technical concepts to non-technical partners clearly and concisely</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Prior work on AI safety, governance, red-teaming, or responsible AI deployment</li>
<li>Experience with cloud platforms (AWS, GCP, Azure), containerization (Docker/Kubernetes), or API orchestration</li>
<li>Background in policy-adjacent technical roles, civic tech, or public-interest technology with an international focus</li>
<li>Contributions to open-source AI projects or developer tools</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Software Engineering, AI/ML, Cloud Platforms, Containerization, API Orchestration, Policy Analysis, Edtech, Scientific Research Support, Public Health Modeling, Regulatory Workflows, Citizen-Facing Services, AI Safety, Governance, Red-Teaming, Responsible AI Deployment, Policy-Adjacent Technical Roles, Civic Tech, Public-Interest Technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems for understanding the universe and aiding humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5074110007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b0c17b4f-3f4</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role</p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p>Responsibilities</p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Software engineering, Complex ML systems, LLMs, AI safety and responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112018008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e65fbd6-0d0</externalid>
      <Title>Product Manager, API Growth</Title>
      <Description><![CDATA[<p>As a Product Manager for API Growth, you&#39;ll drive growth for the Claude Platform - the surface where developers and businesses build directly on Claude. Your efforts will span the entire funnel, driving growth of key API metrics across acquisition, activation, and monetization. You&#39;ll work closely with a cross-functional team of engineers, designers, marketers, and data scientists to develop and execute strategies that accelerate our growth while maintaining our commitment to safety and beneficial AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and execute a product strategy focused on driving Claude Platform growth metrics such as acquisition, activation, and monetization</li>
<li>Lead the ideation, development, and launch of growth features across our API product - onboarding, console, docs, billing, and self-serve upgrade paths</li>
<li>Analyze product metrics and user feedback to identify opportunities and optimize performance</li>
<li>Collaborate with engineering, design, developer relations, and marketing teams to deliver high-impact growth initiatives</li>
<li>Conduct user research to understand customer needs and pain points</li>
<li>Define and track key performance indicators (KPIs) for API growth</li>
<li>Balance rapid iteration with our commitment to AI safety and ethics</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>6+ years of product management experience, with the majority in growth-focused roles</li>
<li>Experience driving growth for API, developer platform, or technical / usage-based products</li>
<li>Strong analytical skills, and experience with A/B testing and funnel optimization</li>
<li>Excellent communication and stakeholder management skills</li>
<li>Ability to thrive in a fast-paced, ambiguous environment</li>
<li>Passion for AI technology and its potential impact on society</li>
<li>Technical background or ability to work effectively with engineering teams</li>
<li>Founder experience is a plus</li>
</ul>
<p>The annual compensation range for this role is $305,000-$385,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$305,000-$385,000 USD</Salaryrange>
      <Skills>product management, API growth, cross-functional team collaboration, data analysis, user research, AI safety and ethics, A/B testing, funnel optimization, technical background, founder experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5181852008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b1d2ddf-22f</externalid>
      <Title>Product Manager, Monetization</Title>
      <Description><![CDATA[<p>As a Product Manager for Monetization, you&#39;ll drive revenue growth across our product suite spanning Claude.ai, Claude Code, Claude Cowork, and Claude Platform. You&#39;ll work closely with other growth teams that are focused on driving growth for specific audiences. You&#39;ll also partner with a cross-functional team of engineers, designers, marketers, and data scientists to develop and execute strategies that accelerate our growth while maintaining our commitment to safety and beneficial AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and execute a product strategy focused on increasing revenue growth across our suite of products</li>
<li>Drive monetization efforts spanning pricing &amp; packaging, free-to-paid conversion, and consumption-based billing</li>
<li>Analyze product metrics and user feedback to identify monetization opportunities and optimize performance</li>
<li>Collaborate with engineering, design, and marketing teams to deliver high-impact growth initiatives</li>
<li>Conduct user research to understand customer needs and pain points</li>
<li>Define and track key performance indicators (KPIs) for growth initiatives</li>
<li>Balance rapid iteration with our commitment to AI safety and ethics</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>6+ years of product management experience, with the majority in growth-focused roles</li>
<li>Experience working on growth for mass-scale subscription businesses</li>
<li>Direct experience owning monetization – pricing, packaging, paid conversion, or upgrade funnels</li>
<li>Strong analytical skills, and experience with A/B testing and funnel optimization</li>
<li>Excellent communication and stakeholder management skills</li>
<li>Ability to thrive in a fast-paced, ambiguous environment</li>
<li>Passion for AI technology and its potential impact on society</li>
<li>Technical background or ability to work effectively with engineering teams</li>
<li>Founder experience is a plus</li>
</ul>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$305,000-$385,000 USD</Salaryrange>
      <Skills>product management, growth focused roles, monetization, pricing &amp; packaging, free-to-paid conversion, consumption-based billing, A/B testing, funnel optimization, user research, key performance indicators, AI safety and ethics, technical background, engineering teams, founder experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153773008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b7e7a6e-638</externalid>
      <Title>Product Manager, Information Quality, Frontier AI</Title>
      <Description><![CDATA[<p>As a product manager at Google DeepMind, you will be responsible for shipping cutting-edge AI research breakthroughs into Google&#39;s AI models and products. You will work with brilliant AI researchers who are focused on making the biggest advances across new model architectures, AGI, and advanced applications like Agents and Robotics.</p>
<p>Your job will be to understand the implications of generative AI on the broader societal information ecosystem. You will tackle critical challenges surrounding misinformation, advance the detection and provenance of genAI content, and analyze the shifting economics of information in a society augmented by AI.</p>
<p>Key responsibilities:</p>
<ul>
<li>Understand the societal impact of new generative models and drive research breakthroughs in information quality, provenance, and trust.</li>
<li>Build collaborations with product teams across Google (e.g., Search, YouTube, News) and model development teams to ship cutting-edge genAI detection and misinformation mitigation technologies.</li>
<li>Work with researchers to envision, develop, and ship new agentic tools aimed at enhancing human information literacy and critical understanding of AI-generated content.</li>
<li>Investigate and model the economics of information to ensure sustainable, high-quality knowledge creation and dissemination in an AI-abundant future.</li>
<li>Evangelize research in AI safety and information integrity, making technical breakthroughs as self-serve and platformized as possible to scale across Google products.</li>
<li>Sometimes, incubate entirely new products or frameworks focused on societal resilience and trust.</li>
</ul>
<p>In order to set you up for success as a Product Manager at DeepMind, we look for the following skills and experience:</p>
<ul>
<li>BSc, MSc or PhD in Computer Science, Information Science, Economics, or a related field.</li>
<li>Deep understanding of AI models and their implications for content generation and dissemination.</li>
<li>Demonstrated experience in building AI products rooted in trust &amp; safety, information retrieval, content provenance, or misinformation detection.</li>
<li>A rich understanding of the current societal challenges posed by generative AI, including the economics of the web and digital information ecosystems.</li>
<li>Ability to stay up-to-speed on all relevant research in synthetic content detection, watermarking, and agentic systems.</li>
<li>Ability to build clear visions and strategies for translating complex societal challenges into scalable product features or tools.</li>
<li>Strong ability to communicate complex technical and societal concepts with great simplicity.</li>
<li>Very strong ability to collaborate in high stakes environments with multiple senior stakeholders, policy experts, and technical leads.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$240,000 USD - $334,000 USD + bonus + equity + benefits</Salaryrange>
      <Skills>AI models, Generative AI, Information quality, Provenance, Trust, Misinformation detection, Economics of information, AI safety, Information integrity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>DeepMind is a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7646114</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0806749e-694</externalid>
      <Title>Engineering Manager, Agent Prompts &amp; Evals</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for an Engineering Manager to lead the Agent Prompts &amp; Evals team. This team owns the infrastructure that lets Anthropic ship model and prompt changes with confidence: the eval frameworks, system prompt pipelines, and regression-detection systems that every model launch depends on.</p>
<p>When a new Claude model is ready to ship, this team is the one answering “is it actually better in our products?” When a product team wants to change how Claude behaves, this team owns the tooling that tells them whether they broke something. It’s a platform team whose platform is model behavior itself.</p>
<p>The team sits deliberately at the seam between product engineering and research. You’ll partner closely with other evals groups across the company on shared infrastructure and methodology, with product teams who are shipping features on top of Claude, and with the TPMs and research PMs driving model launches. The pace is set by the model release cadence, and the team operates as both a platform owner and a hands-on partner during launch periods.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead and grow a team of prompt engineers and platform software engineers</li>
<li>Own the product-side eval platform: the frameworks, dashboards, bulk runners, and CI integrations that product teams use to measure Claude’s behavior and catch regressions before they ship</li>
<li>Own system prompt infrastructure: versioning, deployment, rollback, and review tooling for the prompts that run in production across claude.ai, the API, and agentic surfaces</li>
<li>Be a steady hand through model launches; these are the team’s highest-stakes operational moments, and the EM is the backstop when things get chaotic</li>
<li>Build durable collaboration with other evals groups across the company; this means real work on ownership boundaries, shared roadmaps, and avoiding tragedy-of-the-commons on shared eval infrastructure</li>
<li>Recruit, close, and retain engineers who want to work at the intersection of product engineering and model behavior</li>
<li>Shape where the team invests next: there are credible paths into frontier eval development, model launch automation, and deeper prompt engineering support, and part of the job is sequencing them</li>
<li>Push the team toward measuring things that are hard to measure (behavioral drift, prompt quality, harness parity), not just things that are easy</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years in software engineering with 3+ years managing engineering teams, including experience leading a platform, infra, or developer-tooling team where your customers were other engineers</li>
<li>A track record of building “pits of success”: tooling and process that made it easy for other teams to do the right thing without needing to understand all the details</li>
<li>Comfort managing a team with a mixed charter: platform ownership, service-to-other-teams, and a launch-driven operational rhythm, all at once</li>
<li>Enough technical depth to engage on system design, review pipeline architecture, and be credible in debates with strong ICs; you don’t need to be writing code by hand every day, but you should be able to read it, review it, and be comfortable leveraging Claude to understand, design, and occasionally build</li>
<li>A product mindset and willingness to wear multiple hats when the work calls for it</li>
<li>Demonstrated ability to build and maintain peer relationships with partner orgs that have different cultures and incentives: negotiating ownership, aligning roadmaps, and holding ground when it matters without being territorial about it</li>
<li>Experience recruiting and closing senior ICs in a competitive market</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior exposure to LLM evals, ML experimentation platforms, or model quality work, even tangentially</li>
<li>Experience with A/B testing infrastructure, feature flagging, or gradual rollout systems</li>
<li>Background in devtools, CI/CD platforms, or testing infrastructure at scale</li>
<li>A history of managing teams that sit between two larger orgs and making that position an asset rather than a liability</li>
<li>Interest in AI safety and alignment; not required, but it makes the “why” of the work land harder</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>Software engineering, Team management, Platform ownership, Service-to-other-teams, Launch-driven operational rhythm, System design, Pipeline architecture, Product mindset, Peer relationships, Recruiting and closing senior ICs, LLM evals, ML experimentation platforms, Model quality work, A/B testing infrastructure, Feature flagging, Gradual rollout systems, Devtools, CI/CD platforms, Testing infrastructure, AI safety and alignment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5159608008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5fca34aa-ab7</externalid>
      <Title>Enterprise Account Executive, Federal Partners Sales</Title>
      <Description><![CDATA[<p>As a Federal Partners Account Executive at Anthropic, you&#39;ll drive revenue by selling our safe, frontier AI solutions directly to Systems Integrators (SI) and Independent Software Vendors (ISV) in the public sector space.</p>
<p>You&#39;ll focus on selling directly to partners to ensure Anthropic&#39;s AI capabilities are delivered within their own solutions and service offerings. Working closely with GTM, product, and marketing teams, you&#39;ll help these partners understand and implement our technology while driving significant revenue growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Win new business and drive revenue for Anthropic by directly selling to Systems Integrators and ISVs in the public sector space, owning the full sales cycle from prospecting through close</li>
<li>Identify net-new revenue by selling to SIs with prime contracts, helping them integrate AI into their technology stack and consulting practices to differentiate their offerings, accelerate delivery, and win more competitive bids</li>
<li>Navigate complex technical sales conversations with partners&#39; engineering and product teams</li>
<li>Work with partners&#39; technical teams to ensure successful implementation, adoption and deployment of Anthropic&#39;s AI capabilities into their solutions</li>
<li>Coordinate with cloud providers (AWS, GCP) to align technical and commercial aspects of deals</li>
<li>Build deep relationships with key decision makers within partner organizations</li>
<li>Provide market intelligence and partner feedback to product teams to influence our roadmap and feature development</li>
<li>Create and maintain sales playbooks specific to SI and ISV sales motions</li>
<li>Track and forecast sales pipeline specific to the partner segment</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of enterprise sales experience selling directly to Systems Integrators and ISVs</li>
<li>Security clearances preferred</li>
<li>Strong track record of closing complex technical sales to partner organizations</li>
<li>Deep understanding of SI and ISV business models, buying processes, and technology evaluation criteria</li>
<li>Experience navigating technical requirements and security standards specific to public sector implementations</li>
<li>Proven ability to exceed revenue targets in partner-focused sales roles</li>
<li>Strong technical acumen and ability to engage with partners&#39; engineering teams</li>
<li>Experience coordinating with cloud providers in complex deal scenarios</li>
<li>Excellent communication skills and ability to present to both technical and business audiences</li>
<li>Strategic thinking combined with hands-on sales execution capabilities</li>
<li>Understanding of public sector procurement processes and how partners operate within them</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its technical value to partner organizations</li>
</ul>
<p>Annual Salary: $360,000-$435,000 USD</p>
<p>This is a full-time role with a hybrid policy, requiring at least 25% of the time to be spent in the office. Visa sponsorship is available.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, Systems Integrators and ISVs, Security clearances, Complex technical sales, Public sector implementations, Cloud providers, Technical acumen, Communication skills, Strategic thinking, Public sector procurement processes, AI safety and research, Reliable, interpretable, and steerable AI systems, GTM, product, and marketing teams, Market intelligence and partner feedback, Sales playbooks, Sales pipeline forecasting</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5160180008</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77be5b8-7b6</externalid>
      <Title>Finance Expert - Risk</Title>
      <Description><![CDATA[<p>As a Finance Risk Expert at xAI, you will play a crucial role in advancing our cutting-edge AI systems by providing high-quality annotations, expert evaluations, and detailed risk reasoning using specialized labeling tools.</p>
<p>You will collaborate closely with technical teams to support the development and refinement of new AI capabilities, with a primary focus on quantitative financial risk management domains. Your expertise will drive the selection and rigorous resolution of complex risk-related problems, including market risk modeling, credit and counterparty risk, liquidity and funding risk, operational and model risk, stress testing &amp; scenario analysis, Value at Risk (VaR)/Expected Shortfall (ES), risk attribution, capital allocation (economic/regulatory), and enterprise-wide risk frameworks under regulatory regimes (Basel, Dodd-Frank, IFRS 9, etc.).</p>
<p>This role requires exceptional quantitative rigor, rapid adaptation to evolving guidelines, and the ability to deliver precise, technically sound critiques, derivations, and solutions in a fast-paced environment. As a Finance Risk Expert, you will directly support xAI&#39;s mission by helping train and refine frontier AI models. You will teach the models how risk professionals quantify uncertainties, model tail events, assess portfolio vulnerabilities, ensure regulatory compliance, perform stress testing, and make data-driven decisions to protect capital and maintain financial stability.</p>
<p>Your tasks may include recording audio walkthroughs of risk models, participating in video-based scenario reasoning, or producing detailed quantitative risk analysis traces. All outputs are considered work-for-hire and owned by xAI.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary annotation and evaluation software to deliver accurate labels, rankings, critiques, and comprehensive solutions on assigned projects</li>
<li>Consistently produce high-quality, curated data that adheres to strict quantitative and regulatory standards</li>
<li>Collaborate with engineers and researchers to develop and iterate on new training tasks, risk-specific benchmarks, and evaluation frameworks</li>
<li>Provide constructive feedback to improve the efficiency, precision, and usability of annotation and data-collection tools</li>
<li>Select and solve challenging problems from financial risk domains where you have deep expertise</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Master’s or PhD in a quantitative discipline: Quantitative Finance, Financial Engineering, Financial Mathematics, Statistics, Applied Mathematics, Econometrics, Risk Management, Operations Research, Physics, Computer Science (with risk/finance focus), or closely related field or equivalent professional experience as a quantitative risk analyst, risk modeler, or risk quant</li>
<li>Excellent written and verbal English communication (technical reports, regulatory documentation, explanatory breakdowns)</li>
<li>Strong familiarity with financial risk data sources and platforms (Bloomberg, Refinitiv, Moody’s Analytics, S&amp;P Capital IQ, RiskMetrics, internal bank risk systems, regulatory filings, Basel/FRB datasets, etc.)</li>
<li>Exceptional analytical reasoning, attention to detail, and ability to exercise sound judgment with incomplete or ambiguous data</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Professional experience in quantitative risk management, model development/validation, or risk analytics at a bank, hedge fund, asset manager, insurance company, regulator, or consulting firm</li>
<li>Track record of publication(s) or contributions in refereed journals/conferences on risk, econometrics, statistics, or quantitative finance</li>
<li>Prior teaching, mentoring, or training experience (university, industry workshops, regulatory training)</li>
<li>Proficiency in Python/R for risk modeling (pandas, NumPy, SciPy, statsmodels, QuantLib, PyTorch/TensorFlow for ML risk models, etc.) and familiarity with risk systems (Murex, Calypso, Numerix, etc.)</li>
<li>Experience with Monte Carlo simulation, copula models, stochastic processes, time-series analysis, extreme value theory, or machine learning for risk (anomaly detection, credit scoring, etc.)</li>
<li>Knowledge of regulatory capital frameworks (Basel III/IV, FRB CCAR, SR 11-7 model risk guidance, IFRS 9/CECL, Solvency II)</li>
<li>CFA, FRM, PRM, CQF, or similar risk-focused certifications</li>
<li>Previous exposure to large language models, AI safety, or quantitative evaluation pipelines</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs</li>
<li>For US-based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time</li>
<li>We are unable to provide visa sponsorship</li>
<li>For those who will be working from a personal device, your computer must meet xAI’s minimum hardware requirements</li>
</ul>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Quantitative Finance, Financial Engineering, Financial Mathematics, Statistics, Applied Mathematics, Econometrics, Risk Management, Operations Research, Physics, Computer Science, Python, R, Monte Carlo simulation, copula models, stochastic processes, time-series analysis, extreme value theory, machine learning, Bloomberg, Refinitiv, Moody’s Analytics, S&amp;P Capital IQ, RiskMetrics, internal bank risk systems, regulatory filings, Basel/FRB datasets, Professional experience in quantitative risk management, model development/validation, or risk analytics at a bank, hedge fund, asset manager, insurance company, regulator, or consulting firm, Track record of publication(s) or contributions in refereed journals/conferences on risk, econometrics, statistics, or quantitative finance, Prior teaching, mentoring, or training experience (university, industry workshops, regulatory training), Proficiency in Python/R for risk modeling (pandas, NumPy, SciPy, statsmodels, QuantLib, PyTorch/TensorFlow for ML risk models, etc.) and familiarity with risk systems (Murex, Calypso, Numerix, etc.), Experience with Monte Carlo simulation, copula models, stochastic processes, time-series analysis, extreme value theory, or machine learning for risk (anomaly detection, credit scoring, etc.), Knowledge of regulatory capital frameworks (Basel III/IV, FRB CCAR, SR 11-7 model risk guidance, IFRS 9/CECL, Solvency II), CFA, FRM, PRM, CQF, or similar risk-focused certifications, Previous exposure to large language models, AI safety, or quantitative evaluation pipelines</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI is a technology company focused on developing artificial intelligence systems. It has a small team of highly motivated engineers.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5040365007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c771336-070</externalid>
      <Title>Communications Manager, Research</Title>
<Description><![CDATA[<p>We&#39;re seeking a strategic communications professional to join our team and drive external storytelling for Anthropic&#39;s research and model training work. This role will be the primary external comms partner to our model research and training teams that cover areas like Interpretability and Alignment, some of the most distinctive and technically deep research work happening in AI.</p>
<p>In this role, you&#39;ll craft compelling narratives that make frontier AI research and safety work more accessible and meaningful to journalists, policymakers, the broader research and safety community, and the public.</p>
<p>The ideal candidate is a creative storyteller with strong proactive and reactive muscle, has a genuine curiosity about science and research, and the judgment to navigate nuanced, novel topics with care. You should be able to move fast, think critically, and collaborate across a wide range of technical and non-technical teams.</p>
<p>This is a unique opportunity to shape how one of the world&#39;s leading AI labs communicates about the fundamental science of AI safety at a crucial moment for the field.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and execute communications strategies across our research portfolio, including interpretability, alignment, model welfare, and model training, turning dense technical work into stories that resonate with distinct audiences.</li>
<li>Lead launch moments for major research publications, papers, and milestones, owning the narrative, media strategy, and cross-functional rollout.</li>
<li>Create thought leadership opportunities for research executives to build their profiles as leading voices in AI safety through speaking engagements, bylines, conferences, and media.</li>
<li>Build and maintain relationships with key science and technology journalists, research-focused outlets, and influencers in the AI and ML space.</li>
<li>Serve as a trusted comms partner to research leads and senior stakeholders.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years of experience working in communications, ideally with significant time spent on science, research, or deep-tech storytelling.</li>
<li>Strong record of building proactive communications campaigns around technical or scientific work that resonated with a diverse range of audiences.</li>
<li>Excellent at translating complex technical concepts into compelling messaging.</li>
<li>Media relations experience, including with reporters who cover tech, science, and research rather than just business or product news.</li>
<li>Experience working directly with researchers, scientists, or engineers on a variety of topics.</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience in science communications, academic research communications, or deep-tech comms</li>
<li>Experience building executive brand and thought leadership programs, positioning technical leaders as credible, visible voices in their field through speaking, media, and owned content strategies</li>
<li>Background working directly with research teams, academic labs, or R&amp;D functions at a technology company</li>
<li>Track record of launching research papers, scientific findings, or technical reports to press and public</li>
<li>Experience communicating about emerging or philosophically novel topics where public understanding is limited and the risk of misinterpretation is high</li>
<li>Familiarity with the AI safety and alignment landscape</li>
</ul>
<p><strong>Salary:</strong></p>
<p>$255,000-$255,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$255,000 USD</Salaryrange>
      <Skills>communications, storytelling, media relations, science writing, research communications, deep-tech comms, science communications, academic research communications, thought leadership, executive brand building, AI safety and alignment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153680008</Applyto>
      <Location>New York City, NY; San Francisco, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>18c4817a-81e</externalid>
      <Title>Communications Lead, The Anthropic Institute</Title>
<Description><![CDATA[<p>We are seeking an experienced communications professional to serve as a dedicated communications partner to the Head of Public Benefit. This is a rare opportunity to work alongside an executive leading some of the most consequential research of our time: work on the economic impacts of AI, the implications of self-improving systems, the changing offense and defense balance, and the societal effects of powerful AI on real people and communities.</p>
<p>This role sits at the intersection of executive communications, brand strategy, and portfolio management. You will be the single point of contact and dedicated communications advisor for Jack Clark, managing day-to-day communications requests and strategic engagements across his full portfolio. Working closely with the Institute&#39;s communications, policy, and research teams, you will ensure that the Institute&#39;s work is coordinated, consistent, and lands with the audiences it deserves, far beyond the AI and policy communities.</p>
<p>Key Focus Areas:</p>
<p>Executive Communications and Brand Platform for Jack Clark</p>
<ul>
<li>Serve as the single point of contact and dedicated communications advisor to Jack Clark, Anthropic’s Head of Public Benefit and leader of The Anthropic Institute</li>
<li>Lead the creation and execution of Jack’s personal brand platform, defining how he shows up across media interviews, podcasts, speaking engagements, written commentary, and social media</li>
<li>Assist in preparing Jack for high-profile media engagements and public speaking, developing talking points, briefing materials, and post-engagement analysis</li>
<li>Identify and secure strategic media opportunities, panel placements, and speaking engagements that advance both Jack’s platform and the Institute’s mission</li>
</ul>
<p>Support for Anthropic Institute Communications</p>
<ul>
<li>Work with communications colleagues to develop an overarching communications strategy for The Anthropic Institute, tracking key areas of work across the Economic Index, Societal Impacts Research, and beyond</li>
<li>Work with members of the policy and editorial teams to design the Institute&#39;s publishing and content strategy, ensuring outputs reach the largest possible audience, whether through the Anthropic blog, the Institute&#39;s own site, microsites, interactive experiences, video, audio, or other formats</li>
<li>Translate complex research across economics, societal impacts, frontier red teaming, and AI safety into compelling public narratives that reach audiences beyond the AI and policy communities</li>
<li>Build and maintain relationships with media, researchers, and thought leaders across economics, labor, national security, and general interest outlets, not just technology press</li>
<li>Lead communications for select Institute publications and projects</li>
<li>Coordinate with Anthropic’s broader communications, editorial, and policy teams to keep messaging aligned while maintaining the Institute’s distinct voice and mission</li>
</ul>
<p>Responsibilities</p>
<ul>
<li>Serve as the single point of contact for Jack Clark across all communications needs, managing intake, prioritization, and day-to-day logistics for his portfolio</li>
<li>Coordinate across the Institute’s communications team, policy team, and editorial team to ensure Jack’s priorities are represented and his voice is consistent across all external moments</li>
<li>Lead the creation and ongoing execution of Jack’s brand platform across all channels</li>
<li>Translate technical research on economics, societal impacts, AI safety, and frontier capabilities into narratives accessible to diverse audiences, in partnership with the Institute’s research communications team</li>
<li>Prepare Jack and other Institute leaders for media interviews, podcast appearances, congressional hearings, and speaking engagements</li>
<li>Build and maintain relationships with media across technology, economics, national security, labor, and general interest verticals</li>
<li>Create high-quality content across formats: blog posts, research summaries, briefing documents, talking points, social content, and innovative digital formats</li>
<li>Build repeatable playbooks and processes that allow a lean team to punch well above its weight</li>
</ul>
<p>You May Be a Good Fit If You</p>
<ul>
<li>Have 10+ years of experience in communications, with significant depth in executive communications, thought leadership, or public interest/policy communications</li>
<li>Have a track record of translating complex, technical, or academic research into public narratives that reach audiences well beyond specialist communities</li>
<li>Can embrace the “weird”: AI is a new field where the unexpected and unusual happen all the time</li>
<li>Have built and executed brand platforms for senior executives, thought leaders, or public intellectuals, and have managed their day-to-day communications logistics</li>
<li>Are equally comfortable crafting high-level messaging strategy and producing content under deadline</li>
<li>Have strong media relationships across a range of verticals: technology, economics, national security, policy, and/or general interest</li>
<li>Can move between strategic messaging and fast-turnaround tactical execution without losing quality</li>
<li>Are intellectually curious about AI’s impact on the economy, society, national security, and the future of work, and can engage substantively with researchers working on these problems</li>
<li>Are excited about using AI tools in your own workflows to multiply what a small team can do</li>
<li>Care deeply about AI safety, responsible technology development, and Anthropic’s public benefit mission</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$320,000 USD</Salaryrange>
      <Skills>communications, executive communications, brand strategy, portfolio management, media relations, public speaking, content creation, research analysis, AI safety, responsible technology development, AI tools, data analysis, content marketing, social media management, event planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>The Anthropic Institute</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>The Anthropic Institute is a public benefit corporation building some of the world&apos;s most powerful artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155269008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2999b795-846</externalid>
      <Title>Capital Markets &amp; Investor Relations</Title>
      <Description><![CDATA[<p>As a key member of the Capital Markets &amp; Corporate Development team at Anthropic, you&#39;ll play a central role in shaping our financial strategy and capital structure during a critical period of growth.</p>
<p>You&#39;ll lead capital raising initiatives, manage investor relationships, and help prepare Anthropic for the next phase of its evolution as a company. Working closely with our leadership team, you&#39;ll help ensure Anthropic has the financial resources and strategic partnerships needed to fulfill our mission of building reliable, interpretable, and steerable AI systems.</p>
<p>In this role, you&#39;ll leverage your expertise across capital markets and financial strategy to drive fundraising activities, build robust investor relations frameworks, and lay the groundwork for long-term financial flexibility. You&#39;ll also support selective corporate development opportunities that align with our strategic priorities.</p>
<p>The ideal candidate brings deep capital markets experience, strong analytical capabilities, and exceptional relationship-building skills to help guide Anthropic through its next phase of growth while maintaining our commitment to responsible AI development.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead capital raising processes, working with executive leadership to determine timing, structure, and terms for potential financing rounds</li>
<li>Build and maintain relationships with existing and potential investors across institutional, strategic, and financial investor bases</li>
<li>Develop comprehensive investor relations strategies, including communications, reporting frameworks, and engagement plans</li>
<li>Help build financial infrastructure and reporting capabilities to support institutional-grade transparency and governance</li>
<li>Track and analyze market conditions, comparable transactions, and valuation benchmarks to inform capital strategy</li>
<li>Identify and evaluate strategic investment opportunities and M&amp;A transactions aligned with Anthropic&#39;s mission</li>
<li>Create detailed financial models, valuation analyses, and market research to support strategic decision-making</li>
<li>Prepare and present recommendations to leadership and the board on capital structure and financing strategies</li>
<li>Collaborate with Finance, Legal, and Comms teams to align financial and strategic initiatives with organizational priorities</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in investment banking, equity capital markets, private equity, venture capital, or similar roles with significant capital markets exposure</li>
<li>Possess deep knowledge of capital markets, financial instruments, transaction structures, and institutional investor perspectives</li>
<li>Have a proven track record of successfully executing capital raises or advising on financing transactions</li>
<li>Demonstrate exceptional financial modeling and analytical capabilities</li>
<li>Are a strategic thinker who can connect financial decisions to long-term organizational goals</li>
<li>Have excellent communication skills and can effectively engage with diverse stakeholders including investors, executives, and technical teams</li>
<li>Thrive in fast-paced environments and can manage multiple complex projects simultaneously</li>
<li>Show sound judgment when evaluating risks and opportunities in ambiguous situations</li>
<li>Are passionate about AI safety and align with Anthropic&#39;s mission to develop AI systems that are reliable, interpretable, and steerable</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience in technology or AI-related industries</li>
<li>Bring experience from companies that have scaled through major growth transitions or prepared for significant capital markets events</li>
<li>Possess advanced degrees in finance, business, or related fields</li>
<li>Have worked with both private and public companies, understanding the requirements and expectations at different stages</li>
<li>Demonstrate knowledge of AI research and development landscapes</li>
<li>Show intellectual curiosity about the technical aspects of AI safety and alignment</li>
<li>Have a strong professional network in relevant investment communities</li>
<li>Bring experience working in high-growth, mission-driven organizations</li>
</ul>
<p>Annual compensation range for this role is $250,000-$310,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$250,000-$310,000 USD</Salaryrange>
      <Skills>Investment banking, Equity capital markets, Private equity, Venture capital, Financial modeling, Analytical capabilities, Relationship-building skills, Capital raising, Investor relations, Financial infrastructure, Reporting capabilities, Market analysis, Valuation benchmarks, Strategic investment opportunities, M&amp;A transactions, Financial models, Valuation analyses, Market research, AI safety, Responsible AI development, High-growth industries, Mission-driven organizations, Strong professional network, Intellectual curiosity, Technical aspects of AI safety and alignment</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5116167008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>368082f3-20f</externalid>
      <Title>Account Executive, Mid Market - UKI</Title>
<Description><![CDATA[<p>As a Mid Market Account Executive at Anthropic, you&#39;ll drive adoption of safe, frontier AI across EMEA, selling into companies of roughly 500 to 2,500 employees, some already building with AI and others just beginning to adopt it.</p>
<p>You&#39;ll bring a consultative sales approach to a wide range of buyers, from engineering and product leaders evaluating the technology to operations and commercial leaders focused on measurable ROI. In close partnership with GTM, product, and marketing, you&#39;ll help sharpen our value proposition, sales motion, and positioning for the mid-market.</p>
<p>The ideal candidate is energised by meeting customers wherever they are on the AI adoption curve, across industries, company types, and levels of technical maturity. You&#39;ll build consensus among diverse stakeholders and execute strategies that drive sustainable, responsible adoption of Anthropic&#39;s technology.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive new business revenue by navigating complex organisations to reach decision-makers and educate them on practical AI applications</li>
<li>Execute across a range of buying motions, from fast, product-led technical evaluations to multi-stakeholder procurement, to exceed revenue quota</li>
<li>Identify use cases across product, engineering, and operational functions, and collaborate cross-functionally to position Claude as a practical solution</li>
<li>Build consensus among engineering and product leaders, C-suite executives, IT, operations, and procurement teams around AI adoption</li>
<li>Gather customer feedback to inform product roadmaps and sharpen value propositions for mid-market organisations</li>
<li>Refine our mid-market sales methodology by feeding learnings into playbooks and optimising processes across a range of cycle lengths and buyer types</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>8+ years of B2B software sales experience, with 5+ years closing in mid-market or enterprise accounts</li>
<li>Experience selling into the mid-market across any sector: SaaS, infrastructure, vertical software, financial services, healthcare, manufacturing, or otherwise. We care about the selling muscle and the buyer complexity you&#39;ve handled, not the specific industry</li>
<li>Track record of closing $100K–$5M deals across cycle lengths ranging from weeks (product-led, technical buyers) to quarters (consensus-driven procurement)</li>
<li>Proven ability to navigate complex procurement processes and build consensus among diverse stakeholder groups</li>
<li>A consultative selling approach that meets buyers where they are, going deep with technical evaluators and translating to business outcomes with commercial stakeholders</li>
<li>History of exceeding quota while managing a mixed book of fast-moving and complex accounts</li>
<li>Strong communication skills, with range to engage audiences from technical teams to C-level executives</li>
<li>Credibility with technical stakeholders: you&#39;ve sold to engineering or IT leaders, held your own in a technical evaluation, and partnered closely with solutions engineering without hiding behind them</li>
<li>The ability to articulate ROI frameworks and demonstrate measurable business outcomes</li>
<li>A passion for AI and commitment to its safe, responsible deployment</li>
<li>Comfort building in ambiguity: this is an early GTM team in EMEA and the motion is still being shaped. You&#39;ll help shape it</li>
</ul>
<p>Annual compensation range for this role is €155,000-€205,000 EUR.</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact: advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€155,000-€205,000 EUR</Salaryrange>
      <Skills>B2B software sales experience, Mid-market sales, Complex procurement processes, Consultative selling approach, Technical stakeholders, ROI frameworks, Measurable business outcomes, AI safety and responsible deployment</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4948535008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1811a69-c2f</externalid>
      <Title>Manager, Safety Operations</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>xAI is seeking a Manager, Safety Operations to oversee the processing of appeals and ensure proper labeling of use cases in the system.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Guide the team&#39;s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Ensure the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve the AI&#39;s defenses against illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.</li>
<li>Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven leadership and people management experience in AI-driven operations, with a track record of developing high-performing teams.</li>
<li>Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support, and the ability to propose and implement solutions that increase the security and safety of our platform.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret, apply, and train teams on xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.</li>
<li>Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Quality assurance: Ability to hold the team to our high standard for quality work; managing performance as needed.</li>
<li>Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>
<li>Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Leadership and people management experience in AI-driven operations, Expertise in improving Large Language Models (LLMs), Proven experience in online safety and reducing harm, Ability to interpret, apply, and train teams on xAI safety policies, Proficiency in analyzing complex scenarios and operational metrics, Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills, Quality assurance: Ability to hold the team to our high standard for quality work, Commitment to continuous improvement of processes, people, and operations, Expertise in data analysis to identify emerging abuse vectors, Experience managing teams in Trust and Safety for a social media company, Expertise in leading red-teaming and adversarial testing of Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090695007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7896f519-fc9</externalid>
      <Title>Research Scientist, Safety and Alignment for Humanoid Robotics</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Research Scientist to join our Robotics team, whose mission is to build embodied AI responsibly to benefit people in the physical world. As a Research Scientist, you will design, implement, train, and evaluate large models and algorithms for humanoid robots. Your areas of focus will include algorithmic and model development to improve a robot agent&#39;s understanding of its own embodiment and VLA capabilities, learned policies for appropriate responses around people, and responses in atypical situations such as actuator faults. You will also work on Human Robot Interaction, write software to implement research ideas, and leverage your expertise to participate in a wide variety of research, including learning from simulation, reinforcement learning, learning from demonstrations, vision-language-action models, transformers, video generation, robot control, and more.</p>
<p>To succeed in this role, you will need a PhD in a technical field or equivalent practical experience, knowledge of the latest in large machine learning research, and experience working with real-world robots. Expertise in using large datasets with deep neural networks to make real robots useful is also an advantage.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$141,000 - $202,000 + bonus + equity + benefits</Salaryrange>
<Skills>PhD in a technical field or equivalent practical experience, Knowledge of the latest in large machine learning research, Experience working with real-world robots, Research track record in one or more of the following topics: Humanoid Whole Body Control, Vision Language Action models, Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning and Reinforcement Learning, Sim2Real Transfer, Alignment Techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a leading artificial intelligence research organisation that uses its technologies for widespread public benefit and scientific discovery.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7576917</Applyto>
      <Location>New York City, New York, US</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>d2f7de87-5de</externalid>
      <Title>Chemist (FTC - 12 Month Fixed Term Contract)</Title>
      <Description><![CDATA[<p>Job Title: Chemist (FTC - 12 Month Fixed Term Contract)</p>
<p>As a Chemist in the Responsible Development &amp; Innovation (ReDI) team at Google DeepMind, you will be a principal architect of the safety protocols governing the intersection of Large Language Models (LLMs) and the chemical sciences. You will design and execute rigorous safety evaluations and inform mitigation strategies that ensure our frontier models accelerate scientific discovery without compromising global security.</p>
<p>This role is pivotal in deciding when and how our most advanced AI systems are released to the world.</p>
<p>You will apply your knowledge of chemistry to devise evaluation methodologies (e.g. red-teaming, knowledge elicitation studies, etc.) and contribute to building and running these evaluations on new models. You will analyse the results from evaluations, communicate them clearly to advise and inform decision-makers on the safety of our AI systems, and use them to refine our harm frameworks and inform our mitigation strategies.</p>
<p>In this role, you will work closely with other Subject-Matter Experts (SMEs) in the chemical, biological, radiological and nuclear domains, Research Engineers and Research Scientists focused on developing AI systems, as well as experts in AI ethics and policy.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Architect of Safety Evaluations: Build rigorous, scalable frameworks to evaluate model proficiency in overcoming key bottlenecks in CWA precursor acquisition, chemical synthesis, and weaponisation.</li>
<li>Strategic Advisory: Analyse evaluation results to brief executive decision-makers on model safety, directly influencing deployment &#39;Go/No-Go&#39; decisions.</li>
<li>Harm Framework Innovation: Refine our internal safety taxonomies to account for emergent risks at the intersection of general AI and specialist models like AlphaFold.</li>
<li>Collaborative Mitigation: Partner with Research Engineers to revise mitigation strategies and refine harm frameworks for identified chemical risks. Work with other SMEs in the chemical, biological, radiological, nuclear, and conventional explosive domains to build a unified defence against CBRNE-related risks.</li>
<li>External Engagement: Stay abreast of global chemical security trends and international non-proliferation policy through engagement with external international, governmental, and non-governmental organisations.</li>
</ul>
<p>About You:</p>
<p>You are a seasoned scientist who bridges the gap between laboratory chemistry and emerging technology. You are motivated by the challenge of defending complex systems and possess the critical mindset required to anticipate non-obvious misuse scenarios.</p>
<p>Minimum Qualifications:</p>
<ul>
<li>Chemistry Expertise: PhD in synthetic organic chemistry with at least two years post-doctoral or equivalent experience.</li>
<li>Publication Record: Proven experience publishing as a first author in high-impact general science or chemistry-specific journals, and presenting work at international chemistry conferences. Classified or internal reporting experience will be considered in lieu of public records for candidates from roles in national security.</li>
<li>Security Domain Expertise: Comprehensive understanding of the Chemical Weapons Convention (CWC) and other national and international CWA agreements/treaties, chemical defence protocols, and the landscape of dual-use research in the chemical domain.</li>
<li>Systems Thinking: The ability to translate high-level chemical risks into technical requirements for AI safety.</li>
<li>Communication Excellence: A proven ability in distilling complex technical findings into clear, actionable advice for non-specialist stakeholders.</li>
</ul>
<p>Preferred Experience:</p>
<ul>
<li>Knowledge of CWA defence, including synthesis, detection, and countermeasures.</li>
<li>Direct experience with CBRNE mitigation, non-proliferation, or relevant international security stakeholders.</li>
<li>Familiarity with the machine learning lifecycle and AI Safety Frameworks.</li>
<li>Experience using and/or developing computational chemistry tools (e.g., AlphaFold, retrosynthesis engines, etc.).</li>
<li>Working knowledge of the Frontier Safety Framework (FSF), Critical Capability Levels (CCLs), and similar documents published by other leading AI labs.</li>
<li>Understanding of Google DeepMind AI research output (e.g., AlphaFold, GNoME, WeatherNext, etc.), and AI products (e.g., Gemini, Nano Banana, Genie, etc.).</li>
<li>Passion for the ethical deployment of frontier technologies and AI policy.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000 - $244,000 + bonus + equity + benefits</Salaryrange>
      <Skills>PhD in synthetic organic chemistry, Post-doctoral or equivalent experience, Publication record in high-impact general science or chemistry-specific journals, Presentation experience at international chemistry conferences, Comprehensive understanding of the Chemical Weapons Convention (CWC), Chemical defence protocols, Dual-use research in the chemical domain, Systems thinking, Communication excellence, Knowledge of CWA defence, Direct experience with CBRNE mitigation, Non-proliferation or relevant international security stakeholders, Machine learning lifecycle and AI Safety Frameworks, Computational chemistry tools, Frontier Safety Framework (FSF), Critical Capability Levels (CCLs), Google DeepMind AI research output, AI products, Ethical deployment of frontier technologies and AI policy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7688901</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>d63f049e-ad7</externalid>
      <Title>Security Lead, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Security Lead, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p>As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.</li>
<li>Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.</li>
<li>Design Automated Validation Systems: Collaborate with Google teams to engineer &#39;Auto RedTeaming&#39; solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.</li>
<li>Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.</li>
<li>Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.</li>
<li>Establish Security Scope: Collaborate with Google for conventional infrastructure protection, allowing the team to concentrate solely on agentic logic, model inference, and AI-centric exploits.</li>
</ul>
<p>About You:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).</li>
<li>Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.</li>
<li>Experience managing or technically leading small, high-performance engineering teams.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Familiarity with AI safety benchmarks and evaluation frameworks.</li>
<li>Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.</li>
<li>Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively.</li>
</ul>
<p>The US base salary range for this full-time position is between $248,000 - $349,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,000 - $349,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Bachelor&apos;s degree in Computer Science, Information Security, or equivalent practical experience, Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning, Deep technical understanding of LLM architectures and agentic workflows, Proven ability to work in a consulting capacity with product teams, Experience managing or technically leading small, high-performance engineering teams, Hands-on experience developing exploits for GenAI models, Familiarity with AI safety benchmarks and evaluation frameworks, Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers, Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7560787</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>f3d5bc25-c76</externalid>
      <Title>Research Scientist, Safety and Alignment for Humanoid Robotics</Title>
      <Description><![CDATA[<p>At Google DeepMind, we&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. We&#39;re looking for Research Scientists to join the Robotics team whose mission is to &#39;Build embodied AI responsibly to benefit people in the physical world.&#39;</p>
<p>Our team is focused on ensuring safe humanoid robot actions spanning agentic reasoning, HRI scenarios, and physical safety with VLA models. As a Research Scientist, you will design, implement, train, and evaluate large models and algorithms for humanoid robots. You will make breakthroughs and unlock new humanoid safety capabilities, including algorithmic and model development to improve a robot agent&#39;s understanding of its own embodiment and VLA capabilities.</p>
<p>You will write software to implement research ideas and iterate quickly. You will leverage your expertise to participate in a wide variety of research, including learning from simulation, reinforcement learning, learning from demonstrations, vision-language-action models, transformers, video generation, robot control, humanoid robots, and more.</p>
<p>You will work effectively with a large collaborative team with fast-paced agendas to meet ambitious research goals. You will generate creative ideas, set up experiments, and test hypotheses. You will report and present research findings clearly and efficiently both internally and externally.</p>
<p>To be successful as a Research Scientist at Google DeepMind, we look for PhDs in technical fields or equivalent practical experience. You should have knowledge of the latest in large machine learning research and experience working with real-world robots. Expertise with a subset of the following topics would be an advantage: Humanoid Whole Body Control, Vision Language Action models, Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning, and Reinforcement Learning.</p>
<p>The US base salary range for this full-time position is between $141,000 - $202,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$141,000 - $202,000 + bonus + equity + benefits</Salaryrange>
      <Skills>PhD in a technical field, Knowledge of large machine learning research, Experience working with real-world robots, Humanoid Whole Body Control, Vision Language Action models, Motion Planning, Force Control, AI Safety, Diffusion Policies, World Models, Imitation Learning, Reinforcement Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
<Employerdescription>Google DeepMind is an artificial intelligence research company working to advance the state of the art in AI. It was founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7576917</Applyto>
      <Location>New York City, New York, US</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>f73f108d-30a</externalid>
      <Title>Senior Security Engineer, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Senior Security Engineer, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</p>
<p><strong>About Us</strong> The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p><strong>The Role</strong> As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work &#39;in the room&#39; with product builders, identifying architectural flaws during the design phase long before formal reviews begin.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.</li>
<li>Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.</li>
<li>Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks (&#39;Auto Red Teaming&#39;) that prevent regression in future model versions.</li>
<li>Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.</li>
<li>Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.</li>
</ul>
<p><strong>About You</strong> In order to set you up for success as a Senior Security Engineer at Google DeepMind, we look for the following skills and experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Strong coding skills in Python, Go, or C++ with experience building security tools or automation.</li>
<li>Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Experience working in a consulting capacity with product teams or in a fast-paced &#39;startup-like&#39; environment.</li>
<li>Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.</li>
<li>Ability to translate complex probabilistic risks into actionable engineering fixes for developers.</li>
</ul>
<p><strong>Salary &amp; Benefits</strong> The US base salary range for this full-time position is between $166,000 and $244,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000 - $244,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Python, Go, C++, Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, AI vulnerability classes, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, fuzzing techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a technology company that specializes in artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7596438</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US; Zurich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>c3804660-339</externalid>
      <Title>Member of Technical Staff, AI Safety Post-Training</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff, AI Safety Post-Training, you will work to develop and implement cutting-edge safety methodologies for post-training large language models with agentic and reasoning capabilities that are served to millions of users through Copilot every day.</p>
<p>We work on the bleeding edge and leverage the most powerful pretrained models and algorithms, making it critical that we ensure our AI systems behave safely and align with organisational values.</p>
<p>You will be responsible for designing novel safety evaluation frameworks, curating high-quality data for robust evaluations and training, prototyping new safety capabilities, and developing safety-focused fine-tuning algorithms.</p>
<p>We’re looking for outstanding individuals with deep expertise in AI safety who can translate research insights into practical solutions while being a strong communicator and collaborative teammate.</p>
<p>The ideal candidate takes the initiative in exploring new safety methodologies and enjoys building world-class, trustworthy AI experiences in a fast-paced applied research environment.</p>
<p>Microsoft’s mission is to empower every person and every organisation on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realise our shared goals.</p>
<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</p>
<p>This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>AI safety, large language models, agentic and reasoning capabilities, safety evaluation frameworks, data curation, safety-focused fine-tuning algorithms, C, C++, C#, Java, JavaScript, Python, responsible AI, software engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-ai-safety-post-training-mai-super-intelligence-team-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>8ba25656-b84</externalid>
      <Title>Member of Technical Staff, AI Safety Post-Training</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff, AI Safety Post-Training, you will work to develop and implement cutting-edge safety methodologies for post-training large language models with agentic and reasoning capabilities that are served to millions of users through Copilot every day.</p>
<p>We work on the bleeding edge and leverage the most powerful pretrained models and algorithms, making it critical that we ensure our AI systems behave safely and align with organisational values.</p>
<p>You will be responsible for designing novel safety evaluation frameworks, curating high-quality data for robust evaluations and training, prototyping new safety capabilities, and developing safety-focused fine-tuning algorithms.</p>
<p>We’re looking for outstanding individuals with deep expertise in AI safety who can translate research insights into practical solutions while being a strong communicator and collaborative teammate.</p>
<p>The ideal candidate takes the initiative in exploring new safety methodologies and enjoys building world-class, trustworthy AI experiences in a fast-paced applied research environment.</p>
<p>Responsibilities:</p>
<p>Leverage expertise in AI safety to uncover potential risks and develop novel mitigation strategies, including alignment techniques, constitutional AI approaches, RLHF, and robustness improvements for large language models.</p>
<p>Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes.</p>
<p>Build automated safety testing systems, generalise safety solutions into repeatable frameworks, and write efficient code for safety model pipelines and intervention systems.</p>
<p>Maintain a user-oriented perspective by understanding safety needs from user perspectives, validating safety approaches through user research, and serving as a trusted advisor on AI safety matters.</p>
<p>Track advances in AI safety research, identify relevant state-of-the-art techniques, and adapt safety algorithms to drive innovation in production systems serving millions of users.</p>
<p>Embody our culture and values.</p>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree in Computer Science, or related technical discipline AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Master’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Experience prompting and working with large language models.</p>
<p>Experience writing production-quality Python code.</p>
<p>Demonstrated interest in Responsible AI.</p>
<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</p>
<p>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</p>
<p>This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.</p>
<p>Microsoft is an equal opportunity employer.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>AI safety, large language models, agentic and reasoning capabilities, pretrained models and algorithms, safety evaluation frameworks, red-teaming methodologies, automated safety testing systems, safety model pipelines and intervention systems, user-oriented perspective, user research, AI safety research, safety algorithms, Python, C, C++, C#, Java, JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-ai-safety-post-training-mai-super-intelligence-team-3/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a4208de4-f7e</externalid>
      <Title>Member of Technical Staff, AI Safety Post-Training</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff, AI Safety Post-Training, you will work to develop and implement cutting-edge safety methodologies for post-training large language models with agentic and reasoning capabilities that are served to millions of users through Copilot every day.</p>
<p>We work on the bleeding edge and leverage the most powerful pretrained models and algorithms, making it critical that we ensure our AI systems behave safely and align with organisational values.</p>
<p>You will be responsible for designing novel safety evaluation frameworks, curating high-quality data for robust evaluations and training, prototyping new safety capabilities, and developing safety-focused fine-tuning algorithms.</p>
<p>We’re looking for outstanding individuals with deep expertise in AI safety who can translate research insights into practical solutions while being a strong communicator and collaborative teammate.</p>
<p>The ideal candidate takes the initiative in exploring new safety methodologies and enjoys building world-class, trustworthy AI experiences in a fast-paced applied research environment.</p>
<p>Microsoft’s mission is to empower every person and every organisation on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realise our shared goals.</p>
<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</p>
<p>This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>AI safety, large language models, agentic and reasoning capabilities, safety evaluation frameworks, data curation, safety-focused fine-tuning algorithms, Python, C, C++, C#, Java, JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-ai-safety-post-training-mai-super-intelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>68c29e94-faa</externalid>
      <Title>Technical Cyber Threat Investigator</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are looking for a Technical Cyber Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for malicious cyber operations.</p>
<p>You will work at the intersection of AI safety and cybersecurity, conducting thorough investigations into potential misuse cases, developing novel detection techniques, and building robust defenses against emerging cyber threats in the rapidly evolving landscape of AI-enabled risks. Your work will directly protect the broader ecosystem from sophisticated threat actors who seek to leverage AI technology for harm.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for cyber operations, including influence operations, malware development, social engineering, and other adversarial activities</li>
<li>Develop abuse signals and tracking strategies to proactively detect sophisticated threat actors across our platform</li>
<li>Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting LLM systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, using open-source research, dark web monitoring, and internal data</li>
<li>Utilize investigation findings to implement systematic improvements to our safety approach and mitigate harm at scale</li>
<li>Study trends internally and in the broader ecosystem to anticipate how AI systems could be misused, generating and publishing reports</li>
<li>Build and maintain relationships with external threat intelligence partners, information sharing communities, and government stakeholders</li>
<li>Work cross-functionally to build out our threat intelligence program, establishing processes, tools, and best practices</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with large language models and an understanding of how AI technology could be misused for cyber threats</li>
<li>Have subject matter expertise in abusive user behaviour detection, such as influence operations, coordinated inauthentic behaviour, or cyber threat intelligence</li>
<li>Have experience tracking threat actors across surface, deep, and dark web environments</li>
<li>Can derive insights from large datasets to make key decisions and recommendations</li>
<li>Have experience with threat actor profiling and utilising threat intelligence frameworks (MITRE ATT&amp;CK, etc.)</li>
<li>Have strong project management skills and the ability to build processes from the ground up</li>
<li>Possess excellent communication skills to collaborate with cross-functional teams and present to leadership</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Experience working with government agencies or in regulated environments</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p><strong>Deadline to apply</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000 - $290,000 USD</Salaryrange>
      <Skills>SQL, Python, large language models, AI technology, cyber threats, abusive user behaviour detection, threat actor profiling, threat intelligence frameworks, project management, communication skills, experience working with government agencies, background in AI safety, machine learning security, technology abuse investigation, experience building and scaling threat detection systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation working to create reliable, interpretable, and steerable AI systems. Its mission is to make AI safe and beneficial for users and society.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066995008</Applyto>
      <Location>San Francisco, CA, Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>6aa46bac-783</externalid>
      <Title>Software Engineer, Cybersecurity Products</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We&#39;re looking for engineers to join a new effort building AI-powered products and capabilities for cybersecurity. You&#39;ll work across the stack to prototype new ideas and build from the ground up.</p>
<p>This role sits at the intersection of research, product, and go-to-market. You&#39;ll work closely with research teams to develop new model capabilities for security applications, prototype and iterate quickly to validate ideas, and engage directly with customers and partners to inform what we build. The right candidate has the technical depth to engage with research, the product instincts to know what&#39;s worth building, and the drive to move fast.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Prototype and build new AI-powered products for cybersecurity</li>
<li>Iterate quickly based on customer feedback and what you learn</li>
<li>Collaborate with research teams to identify and develop new model capabilities for security applications</li>
<li>Engage directly with customers and partners to understand workflows and inform product direction</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 7+ years of experience as a software engineer</li>
<li>Have experience developing cybersecurity products</li>
<li>Enjoy fast iteration and are energized by prototyping new ideas</li>
<li>Have strong product instincts and enjoy defining what to build, not just how to build it</li>
<li>Are comfortable working closely with research and go-to-market teams</li>
<li>Have strong communication skills and can work effectively across functions</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience in incident response, reverse engineering, network analysis, penetration testing, or similar fields</li>
<li>Experience working with AI/ML models and building products on top of them</li>
<li>Experience building agentic applications</li>
</ul>
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. <strong>Guidance on Candidates&#39; AI Usage:</strong> Learn about our policy for using AI in our application process</p>
<p>Interested in building your career at Anthropic? Get future opportunities by following us on LinkedIn and Twitter.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>software engineer, cybersecurity products, AI/ML models, incident response, reverse engineering, network analysis, penetration testing, agentic applications, circuit-based interpretability, multimodal neurons, scaling laws, AI &amp; compute, concrete problems in AI safety, learning from human preferences</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation headquartered in San Francisco, with a mission to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5063007008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>557894f1-074</externalid>
      <Title>Prompt Engineer, Agent Prompts &amp; Evals</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for prompt and context engineers to join our product engineering team to help build AI-first products, features, and evaluations. Your mission will be to bridge the gap between model capabilities and real product experience, working with product teams to build consistent, safe, and beneficial user experiences across all product surfaces.</p>
<p>You will be deeply involved in new product feature and model releases at Anthropic, combining engineering expertise with an understanding of frontier AI applications and model quality. You’ll become an expert on Claude’s behavioural quirks and capabilities and apply that knowledge to deliver the best possible user experience across models and domains. You’ll be the first resource for product teams working on Claude’s AI infrastructure: system prompts, tool prompts, skills, and evaluations.</p>
<p>This role requires someone who can effectively balance caring deeply about making Claude the best it can be while also supporting a wide variety of concurrent projects and efforts across many product teams.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li><strong>Prompt Engineering Excellence:</strong> Design, test, and optimise system prompts and feature-specific prompts that shape Claude’s behaviour across consumer and API products.</li>
<li><strong>Evaluation Development:</strong> Build and maintain comprehensive evaluation suites that ensure model quality and consistency across product launches and updates.</li>
<li><strong>Cross-functional Collaboration:</strong> Partner closely with product teams, research teams, and safeguards to ensure new features meet quality and safety standards.</li>
<li><strong>Model Launch Support:</strong> Play a critical role in model releases, ensuring smooth rollouts and catching regressions before they impact users.</li>
<li><strong>Infrastructure Contribution:</strong> Help build and improve the frameworks and tools that allow teams to develop and test prompts and features with confidence.</li>
<li><strong>Knowledge Transfer:</strong> Mentor product engineers on prompt engineering best practices and help teams build their first evaluations.</li>
<li><strong>Rapid Iteration:</strong> Work in a fast-paced environment where model capabilities advance daily, requiring quick adaptation and creative problem-solving.</li>
</ul>
<p><strong>What We’re Looking For</strong></p>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience with Python or similar languages.</li>
<li>Demonstrated experience with LLMs and prompt engineering (through work, research, or significant personal projects).</li>
<li>Strong understanding of evaluation methodologies and metrics for AI systems.</li>
<li>Excellent written and verbal communication skills – you’ll need to explain complex model behaviours to diverse stakeholders.</li>
<li>Ability to manage multiple concurrent projects and prioritise effectively.</li>
<li>Experience with version control, CI/CD, and modern software development practices.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with Claude or other frontier AI models in production settings.</li>
<li>Background in machine learning, NLP, or related fields.</li>
<li>Experience with A/B testing and experimentation frameworks (e.g., Statsig).</li>
<li>Familiarity with AI safety and alignment considerations.</li>
<li>Experience building tools and infrastructure for ML/AI workflows.</li>
<li>Track record of improving AI system performance through systematic evaluation and iteration.</li>
</ul>
<p><strong>You Might Thrive in This Role If You…</strong></p>
<ul>
<li>Get excited about the nuances of how language models behave and love finding creative ways to improve their outputs.</li>
<li>Enjoy being at the intersection of research and product, translating cutting-edge capabilities into user value.</li>
<li>Are comfortable with ambiguity and can define success metrics for novel AI features.</li>
<li>Have a strong sense of ownership and drive projects from conception to production.</li>
<li>Are passionate about building AI systems that are helpful, harmless, and honest.</li>
<li>Thrive in collaborative environments and enjoy teaching others.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, we want to remind you that we will never ask you to pay any fees for the hiring process. If someone contacts you claiming to be from Anthropic and asks for money, please report it to us immediately.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>Python, LLMs, Prompt engineering, Evaluation methodologies, Metrics for AI systems, Version control, CI/CD, Modern software development practices, Claude, Frontier AI models, Machine learning, NLP, A/B testing, Experimentation frameworks, AI safety, Alignment considerations, Tools and infrastructure for ML/AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. Their team is a group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107121008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>4e0b9271-cdd</externalid>
      <Title>Research Engineer / Scientist, Alignment Science</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you&#39;ll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.</p>
<p>Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. Our current areas of focus include:</p>
<ul>
<li><strong>Scalable Oversight:</strong> Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.</li>
<li><strong>AI Control:</strong> Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</li>
<li><strong>Alignment Stress-testing:</strong> Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.</li>
<li><strong>Automated Alignment Research:</strong> Building and aligning a system that can speed up &amp; improve alignment research.</li>
<li><strong>Alignment Assessments:</strong> Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.</li>
<li><strong>Safeguards Research:</strong> Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.</li>
<li><strong>Model Welfare:</strong> Investigating and addressing potential model welfare, moral status, and related questions. See our program announcement and welfare assessment in the Claude 4 system card for more.</li>
</ul>
<p><em>Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.</em></p>
<p><strong>Representative projects:</strong></p>
<ul>
<li>Test the robustness of our safety techniques by training language models to subvert them, and see how effective they are at subverting our interventions.</li>
<li>Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.</li>
<li>Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.</li>
<li>Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.</li>
<li>Contribute ideas, figures, and writing to research papers, blog posts, and talks.</li>
<li>Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have significant software, ML, or research engineering experience</li>
<li>Have some experience contributing to empirical AI research projects</li>
<li>Have some familiarity with technical AI safety research</li>
<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Care about the impacts of AI</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience authoring research papers in machine learning, NLP, or AI safety</li>
<li>Have experience with LLMs</li>
<li>Have experience with reinforcement learning</li>
<li>Have experience with Kubernetes clusters and complex shared codebases</li>
</ul>
<p><strong>Candidates need not have:</strong></p>
<ul>
<li>100% of the skills needed to perform the job</li>
<li>Formal certifications or education credentials</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary:</p>
<p>$350,000 - $500,000 USD</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruits through our website and other job boards, and we will never ask you to pay for any part of the recruitment process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $500,000 USD</Salaryrange>
      <Skills>Python, Machine Learning, Research Engineering, AI Safety, Scalable Oversight, AI Control, Alignment Stress-testing, Automated Alignment Research, Alignment Assessments, Safeguards Research, Model Welfare, Experience authoring research papers in machine learning, NLP, or AI safety, Experience with LLMs, Experience with reinforcement learning, Experience with Kubernetes clusters and complex shared codebases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4631822008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c76d0c6d-ec7</externalid>
      <Title>Technical Policy Manager, Cyber Harms</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>
<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>
<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>
<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>
<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>
<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>
<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>
<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and ambiguities related to cybersecurity risks</li>
<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>
<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>
<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>
<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>
<li>Monitor emerging technologies and threat landscapes for their potential to contribute to new risks and mitigation strategies, and strategically address these</li>
<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>
<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>
<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>
<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>
<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>
<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>
<li>Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>
<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>
<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>
<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>
<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>
<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Background in AI/ML systems, particularly experience with large language models</li>
<li>Experience developing ML-based security systems or adversarial ML research</li>
<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>
<li>Published security research, disclosed vulnerabilities, or participated in bug bounty programs</li>
<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>
<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>
<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Not specified</Salaryrange>
      <Skills>cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, programming (Python), threat modelling, policy frameworks, responsible disclosure practices, vulnerability coordination, cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, large language models, ML-based security systems, adversarial ML research, defence, intelligence, or security organisations, NSA, CISA, national labs, security contractors, published security research, disclosed vulnerabilities, bug bounty programs, Trust &amp; Safety operations, content moderation at scale, OSCP, OSCE, GXPN, or equivalent certifications, dual-use security research concerns, ethical considerations in AI safety</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066981008</Applyto>
      <Location>San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>453f53c5-e0d</externalid>
      <Title>Research Engineer, AI Observability</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools enabling Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets.</p>
<p>Our tools are widely adopted internally — powering ongoing enforcement, threat intelligence investigations, model audits, and more — and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer on our team, you&#39;ll design and build systems that let AI analyse large, unstructured datasets — think tens or hundreds of thousands of conversations or documents — and produce structured, trustworthy insights. You&#39;ll work across the full stack, from core analysis frameworks through user-facing apps and interfaces.</p>
<p>This is a high-leverage role. The tools you build will be used by dozens of researchers and investigators, and directly shape our ability to measure and mitigate both misuse and misalignment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and implement AI-based monitoring systems for AI training and deployment</li>
<li>Extend and improve core frameworks for processing large volumes of unstructured text</li>
<li>Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions</li>
<li>Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings</li>
<li>Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have 5+ years of software engineering experience, with meaningful exposure to ML systems</li>
<li>Are excited about the problem of scaling human oversight of AI systems</li>
<li>Are familiar with LLM application development (context engineering, evaluation, orchestration)</li>
<li>Enjoy building tools that other people use — you care about UX, reliability, and documentation</li>
<li>Can context-switch between deep infrastructure work and user-facing product thinking</li>
<li>Thrive in collaborative, cross-functional environments</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Research experience in AI safety, alignment, or responsible deployment</li>
<li>Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks</li>
<li>Experience with productionizing internal tools or building developer-facing platforms</li>
<li>Background in building monitoring or observability systems</li>
<li>Comfort with ambiguity — our team is small and growing, and you&#39;ll help define what we become</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>software engineering, ML systems, LLM application development, context engineering, evaluation, orchestration, UX, reliability, documentation, data science, engineering, large-scale data processing frameworks, productionizing internal tools, developer-facing platforms, monitoring, observability systems, research experience in AI safety, alignment, responsible deployment, practical experience with both data science and engineering, experience with productionizing internal tools or building developer-facing platforms, background in building monitoring or observability systems, comfort with ambiguity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. Our team is a group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5125083008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>7cdbf387-4bf</externalid>
      <Title>Security Engineer, Offensive Security</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organisation and keeping the safety and trust of our users at the forefront of everything we do.</p>
<p><strong>What You&#39;ll Do:</strong></p>
<ul>
<li>Conduct red and purple team engagements simulating advanced threat actors across our cloud infrastructure, endpoints and bare metal deployments.</li>
<li>Conduct penetration tests against specific, high-value deployments.</li>
<li>Contribute to AI-assisted security testing tooling and workflows.</li>
<li>Work cross-functionally with other security and engineering teams, particularly on AI-specific attack scenarios.</li>
<li>Document and present findings to technical and executive audiences, translating attack narratives into actionable risk insights that inform security roadmaps.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>5+ years of hands-on experience in red teaming and offensive security operations.</li>
<li>Deep expertise in at least two of: macOS security, Linux Security, Cloud security (GCP/AWS/Azure), Kubernetes, CI/CD pipelines.</li>
<li>Track record of discovering novel attack vectors and chaining vulnerabilities creatively.</li>
<li>Experience conducting adversarial simulations against well-defended environments.</li>
<li>Strong engineering skills (Python, Go, or similar).</li>
<li>Ability to write clear findings that drive action, helping teams understand risk and prioritise fixes.</li>
<li>Collaborative approach, working closely with the blue team.</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>Prior work at organisations with state actor threat models.</li>
<li>Interest in AI safety and how security engineering contributes to responsible AI development.</li>
<li>Background testing AI/ML systems or agentic workflows.</li>
<li>Familiarity with detection engineering and SIEM/EDR platforms from the defensive side.</li>
<li>Experience with data centre security or hardware-based attacks.</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional relocation assistance, and a comprehensive benefits package that includes medical, dental, and vision insurance, 401(k) matching, and paid time off.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $320,000 USD</Salaryrange>
      <Skills>macOS security, Linux Security, Cloud security (GCP/AWS/Azure), Kubernetes, CI/CD pipelines, Python, Go, AI safety, Detection engineering, SIEM/EDR platforms, Data centre security, Hardware-based attacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>300000</Compensationmin>
      <Compensationmax>320000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5105509008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c8d7ea06-b25</externalid>
      <Title>Technical CBRN-E Threat Investigator</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are looking for a Technical CBRN-E Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRN-E) threats.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for developing, enhancing, or disseminating CBRN-E weapons, pathogens, toxins, or other threats to harm people, critical infrastructure, or the environment</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover sophisticated CBRN-E threat actors</li>
<li>Develop CBRN-E-specific detection capabilities, including abuse signals, tracking strategies, and detection methodologies tailored to dual-use research concerns</li>
<li>Create actionable intelligence reports on CBRN-E attack vectors, vulnerabilities, and threat actor TTPs leveraging AI systems</li>
<li>Conduct cross-platform threat analysis grounded in real threat actor behavior, open-source research, and publicly reported programs</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, regulatory bodies, scientific organizations, and biosecurity/chemical security research communities</li>
<li>Inform safety-by-design strategies by forecasting how threat actors may leverage advances in AI technology for CBRN-E purposes</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have deep domain expertise in biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, or related CBRN-E threat domains</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience with threat actor profiling and utilizing threat intelligence frameworks</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be misused for CBRN-E threats</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Advanced degree (MS or PhD) in biological sciences, chemistry, biodefense, biosecurity, or related field</li>
<li>Real-world experience countering weapons of mass destruction or other high-risk asymmetric threats</li>
<li>Experience working with government agencies or in regulated environments dealing with sensitive CBRN-E information</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Familiarity with synthetic biology, biotechnology, or dual-use research</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230,000 - $290,000 USD</Salaryrange>
      <Skills>SQL, Python, CBRN-E threat domains, biosecurity, chemical defense, biological weapons non-proliferation, dual-use research of concern (DURC), synthetic biology, biotechnology, threat actor profiling, threat intelligence frameworks, large language models, stakeholder management, AI safety, machine learning security, technology abuse investigation, threat detection systems, abuse monitoring, Top Secret security clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5066997008</Applyto>
      <Location>San Francisco, CA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>25ef9793-8f6</externalid>
      <Title>Technical Recruiter, Specialized</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the role:</strong></p>
<p>Anthropic is looking for a talented Technical Recruiter to partner with our Core Tech teams. In this pivotal role, you will be instrumental in shaping the future of our organisation by identifying, engaging, and hiring the best and brightest minds across a range of disciplines. As we continue to push the boundaries of AI research and development, we need a passionate recruiter who can help us build a world-class team dedicated to creating safe and beneficial AI systems.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and execute strategic recruiting plans to identify, source, and hire highly qualified candidates, with a focus on Machine Learning and AI talent</li>
<li>Partner with hiring managers and interviewers to understand hiring needs, team matching, required skills and qualifications</li>
<li>Enhance and implement recruiting processes and programs while maintaining an inclusive process and a high talent bar, such as developing targeted outreach campaigns, building connections with industry leaders, and removing any unfair biases from the hiring process</li>
<li>Collaborate with leadership and cross-functional partners to understand organisational needs and map out long-term talent acquisition strategies that balance priorities across all technical teams</li>
<li>Enhance Anthropic&#39;s employer brand within the research and science community to showcase our mission, culture, and values to candidates</li>
<li>Stay up-to-date on recruiting best practices, emerging sourcing techniques, interview innovations, and workplace trends</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 5+ years of experience in full life cycle recruiting supporting core technical teams</li>
<li>Have a passion for AI&#39;s potential to positively impact the world and realistic assessment of its risks and limitations</li>
<li>Are experimental and open to new, creative recruiting ideas, or have experience working with hiring managers who are open to non-traditional talent strategies</li>
<li>Thrive in fast-paced, dynamic environments and enjoy juggling multiple priorities</li>
<li>Possess strong technical aptitude with the ability to understand and evaluate technical qualifications</li>
<li>Have enthusiasm for deeply understanding the needs of engineers and innovating on recruiting processes to make them more tailored to the world of AI</li>
<li>Have excellent organisational skills and attention to detail, as well as a proactive mindset and ability to operate with autonomy</li>
<li>Have experience partnering with engineers and hiring talent who work on GenAI and LLMs</li>
<li>Have a proven track record of scaling and building diverse and high-performing teams in a fast-paced, high-growth startup environment</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Bring a deep interest in AI safety</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 - $295,000 USD</Salaryrange>
      <Skills>Machine Learning, AI talent, Recruiting, Sourcing, Interviewing, Talent acquisition, Strategic planning, Team management, Communication, Problem-solving, GenAI, LLMs, AI safety, Collaboration, Innovation, Autonomy, Proactivity, Attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation that aims to create reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>295000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4935314008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a97094d0-e90</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analysing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems. However, proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary:</p>
<p>$350,000 - $500,000 USD</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $500,000 USD</Salaryrange>
      <Skills>Python, deep learning frameworks, distributed computing, large-scale distributed systems, high-performance computing, training, fine-tuning, and evaluating large language models, AI safety, responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It aims to build beneficial AI systems that are safe and beneficial for users and society as a whole.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>500000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4613592008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>45350b41-7eb</externalid>
      <Title>Research Engineer / Scientist, Frontier Red Team (Cyber)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in an era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring the safety of self-improving, highly autonomous AI systems, especially those with cyberphysical capabilities. See our previous related work on exploits, partnering with Mozilla, and zero days. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>In the last year, we&#39;ve seen compelling signs that LLMs and agents are increasingly capable of novel cyber capabilities. We think 2026 will be the year where models reach expert-level, even superhuman, in several cybersecurity domains. This is a novel and massive threat surface.</p>
<p>As a Research Scientist on FRT focusing on cyber, you&#39;ll build the tools and frameworks needed to defend the world against advanced AI-enabled cyber threats. Senior candidates will have the opportunity to shape and grow Anthropic&#39;s cyberdefense research program, working with Security, Safeguards, Policy, and other partner teams. This work sits at the intersection of AI capabilities research, cybersecurity, and policy—what we learn directly shapes how Anthropic and the world prepare for AI-enabled cyber threats.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to demonstrations that shape policy discourse, and build the technical defenses that we will need for a future of increasingly powerful AI systems.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting</li>
<li>Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios</li>
<li>Design and build infrastructure for evaluating and enabling AI systems to operate in security environments</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Building frameworks and tools that enable AI models to autonomously find and patch vulnerabilities</li>
<li>Running purple-team simulations where AI defenders compete against AI attackers in network environments</li>
<li>Pointing autonomous AI systems at real-world security challenges (bug bounties, CTFs, etc.) to characterize risks and defensive potential, and to compare performance against human experts</li>
<li>Building demonstrations of frontier AI cyber capabilities for policy stakeholders</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have deep expertise in cybersecurity or security research</li>
<li>Are driven to find solutions to complex, high-stakes problems</li>
<li>Have experience doing technical research with LLM-based agents or autonomous systems</li>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
<li>Have proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with offensive security research, vulnerability research, or exploit development</li>
<li>Research or professional experience applying LLMs to security problems</li>
<li>Track record in competitive CTFs, bug bounties, or other security-related competitions</li>
<li>Experience building security tools or automation</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>cybersecurity, security research, LLM-based agents, autonomous systems, Python, software engineering, offensive security research, vulnerability research, exploit development, AI safety research, threat modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. The company has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5076477008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>ca30dbae-0f6</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, Large language models, ML systems, High-performance computing, LLMs, AI safety, Responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112018008</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>48f07618-377</externalid>
      <Title>Research Engineer, Frontier Red Team (Autonomy)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Red Team (FRT) is a small, focused technical research team within Anthropic&#39;s Policy organization. Our goal is to make the entire world safer in this era of advanced AI by understanding what these systems can do and building the defenses that matter.</p>
<p>In 2026, we&#39;re focused on researching and ensuring safety with self-improving, highly autonomous AI systems—especially ones with cyberphysical capabilities. See our previous related work on cyberdefense, robotics, and Project Vend. This is early-stage, high-conviction research with the potential for outsized impact.</p>
<p><strong>About the Role</strong></p>
<p>Our team is focused on a critical question: how do we defend against a world where powerful, autonomous, self-improving AI systems may be used adversarially?</p>
<p>As a Research Engineer on our team, you&#39;ll build and evaluate model organisms of autonomous systems and develop the defensive agents needed to counter them. This work sits at the intersection of AI capabilities research, security, and policy—what we learn directly shapes how Anthropic and the world prepare for advanced AI.</p>
<p>This is applied research with real-world stakes. Your work will inform decisions at the highest levels of the company, contribute to public demonstrations that shape policy discourse, and help build technical defenses that could matter enormously as AI systems become more capable.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and build autonomous AI systems that can use tools and operate across diverse environments—creating model organisms that help us understand and defend against advanced adversarial AI</li>
<li>Create evals and training environments to understand and shape agent behavior in desirable ways</li>
<li>Develop defensive agents that can detect, disrupt, or outcompete adversarial AI systems in realistic scenarios</li>
<li>Interface Claude with hardware platforms (e.g. robotics, physical systems) to understand cyberphysical risks and defenses</li>
<li>Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public</li>
<li>Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions</li>
</ul>
<p><strong>Sample Projects</strong></p>
<ul>
<li>Developing systems where Claude controls diverse hardware and robotics platforms simultaneously</li>
<li>Creating attack-defend simulations (CTFs, wargames, adversarial games) to test defensive AI capabilities</li>
<li>Designing and implementing RL environments for training defensive agents</li>
<li>Pointing autonomous systems at real-world security challenges to characterize risks and develop mitigations</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have strong software engineering skills, particularly in Python</li>
<li>Have experience building and working with LLM-based agents or autonomous systems</li>
<li>Are driven to find solutions to ambiguously scoped, high-stakes problems</li>
<li>Design and run experiments quickly, iterating fast toward useful results</li>
<li>Thrive in collaborative environments (we love pair programming!)</li>
<li>Care deeply about AI safety and want your work to have real-world impact on how humanity navigates advanced AI</li>
<li>Can own entire problems end-to-end, including both technical and non-technical components</li>
<li>Are comfortable working on sensitive projects that require discretion and integrity</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience with reinforcement learning, self-play, or multi-agent systems</li>
<li>Experience with robotics, hardware interfaces, or cyberphysical systems</li>
<li>Track record of building demos or prototypes that communicate complex technical ideas</li>
<li>Experience working with external stakeholders (policymakers, government, researchers)</li>
<li>Familiarity with AI safety research and threat modeling for advanced AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiters to help us find the best candidates for our open roles.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $850,000 USD</Salaryrange>
      <Skills>Python, LLM-based agents, Autonomous systems, Reinforcement learning, Self-play, Multi-agent systems, Robotics, Hardware interfaces, Cyberphysical systems, AI safety research, Threat modeling, Software engineering, Collaborative environments, AI safety, Discretion and integrity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5067100008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c1277210-c05</externalid>
      <Title>Model Policy Manager, Chemical &amp; Biological Risk</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>Estimated Base Salary $207K – $295K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p>The Model Policy team aligns model behavior with desired human values and norms. We co-design policy <em>with</em> models and <em>for</em> models by driving rapid policy taxonomy iteration based on data and defining evaluation criteria for foundational models’ ability to reason about safety. Key focus areas include: catastrophic risk, mental health, teen safety and multimodal safety.</p>
<p><strong>About the Role</strong></p>
<p>Providing access to frontier AI systems raises complex questions around dual-use science and catastrophic risk. How should models respond to requests involving chemical synthesis, biological experimentation, or pathogen research? Where is the boundary between legitimate scientific inquiry and information that could enable misuse? How do we design policies that meaningfully reduce risk without unnecessarily restricting beneficial research?</p>
<p>This is a senior role in which you’ll help shape policy creation and development at OpenAI for addressing biological and chemical risks. You will develop structured policy frameworks and taxonomies to guide safe model behavior. This role sits at the intersection of biosecurity expertise, AI safety research, and policy design. You will help ensure that frontier AI systems can support beneficial life sciences research, such as drug discovery, public health, and biosafety, while reducing the risk that these capabilities could be misused.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Design and maintain model policies governing chemical and biological risk, defining how models should safely handle dual-use scenarios.</li>
<li>Develop structured taxonomies of chemical and biological risk that inform model training data, evaluation benchmarks, and safety monitoring systems.</li>
<li>Translate biosecurity and chemical security expertise into actionable model behavior, working closely with research and engineering teams to operationalize policy in training and evaluation pipelines.</li>
<li>Develop a broad range of subject matter expertise while maintaining agility across topics.</li>
<li>Identify emerging risk vectors where frontier AI capabilities could meaningfully lower barriers to harmful activity and develop mitigation strategies.</li>
<li>Engage with internal and external subject-matter experts in biosecurity, biodefense, and chemical safety to ensure policies reflect real-world risk landscapes.</li>
</ul>
<p>You might thrive in this role if you:</p>
<ul>
<li>Have strong domain expertise in chemistry, biology, biosecurity, or related fields and are motivated to translate that expertise into principled, operational policies that scale to frontier AI systems.</li>
<li>Have experience researching or working with LLMs, machine learning, AI governance, technology policy, or related areas, and enjoy tackling structured reasoning and classification problems—such as defining boundaries between legitimate scientific inquiry and potentially harmful applications.</li>
<li>Have experience designing, refining, or enforcing policies or safeguards for complex systems, whether in AI/ML environments, scientific research governance, national security contexts, or other high-stakes technical domains.</li>
<li>Are comfortable navigating ambiguous, high-stakes problem spaces, balancing risk reduction with the benefits of scientific openness and innovation.</li>
<li>Enjoy building new frameworks from first principles, reasoning about open-ended problems, and generating novel approaches under uncertainty. You take ownership of problems end-to-end—from defining the conceptual framework through collaborating with research and engineering teams.</li>
</ul>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207K – $295K</Salaryrange>
      <Skills>chemistry, biology, biosecurity, AI safety research, policy design, machine learning, LLMs, AI governance, technology policy, structured reasoning, classification problems, policy creation, development, biosecurity expertise, chemical security expertise</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and deploying artificial general intelligence (AGI) in a safe and beneficial way. The company was founded in 2015 and has since grown to become one of the leading AI research and development organizations in the world.</Employerdescription>
      <Employerwebsite>https://www.openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/6df6a3d8-c72b-4e65-acf8-a5d91559533c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>0b39d0db-e3b</externalid>
      <Title>Product Manager, Safety Systems</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Product Manager, Safety Systems</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Product Management</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Safety Systems manages the complete lifecycle of safety efforts for OpenAI’s frontier models, ensuring our models are deployed responsibly and have a positive impact on society. Our work spans diverse research and engineering initiatives—from system-level safeguards and model training to evaluation and red-teaming—all aimed at mitigating misuse and maintaining our high bar for safety. We lead OpenAI&#39;s commitment to developing and deploying safe Artificial General Intelligence (AGI), fostering a culture of trust, responsibility, and transparency.</p>
<p>Our goal is to continuously learn from deployments, distribute AI’s benefits widely, and ensure that powerful tools remain aligned with human values and safety considerations.</p>
<p><strong>About the Role</strong></p>
<p>As a Product Manager on the Safety Systems team, you will drive initiatives which ensure that OpenAI’s frontier model deployments are safe, impactful, and aligned with user needs and technical innovation. You will clarify strategic priorities, develop safety-focused product roadmaps, and collaborate closely with AI researchers, software engineers, policy experts, and cross-functional partners. This role suits a proactive, technically skilled product manager adept at adversarial thinking and excited to tackle challenging, ambiguous problems through structured analysis and collaborative decision-making.</p>
<p>This position is based in San Francisco, CA, with relocation assistance available.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Partner closely with research, engineering, data science, policy teams, and other stakeholders to embed safety throughout the development and deployment of frontier AI models.</li>
<li>Develop comprehensive frameworks for understanding and mitigating deployment safety risks, drawing on data analysis, expert consultation, and adversarial assessments.</li>
<li>Define strategic priorities and product roadmaps focused on improving deployment safety, enhancing reliability, and managing emerging AI capabilities.</li>
<li>Create scalable methodologies, tools, and processes for evaluating, refining, and continuously improving our safety systems.</li>
<li>Establish repeatable processes to integrate cutting-edge AI safety research into OpenAI’s models and product offerings.</li>
<li>Develop and continuously refine clear, actionable metrics that effectively capture safety performance and user experience at scale.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years of experience in product management or related industry roles, with specific expertise in AI safety, trust &amp; safety, integrity, or related domains.</li>
<li>Are deeply curious and interested in interdisciplinary fields such as human-computer interaction, psychology, philosophy, or similar areas.</li>
<li>Have hands-on experience driving consensus and action in ambiguous spaces.</li>
<li>Excel at identifying and challenging underlying assumptions and constraints through insightful questioning.</li>
<li>Are highly effective at cross-functional collaboration and communicating complex technical concepts clearly and persuasively.</li>
<li>Enjoy working in a fast-paced, high-growth environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$293K – $385K • Offers Equity</Salaryrange>
      <Skills>product management, AI safety, trust &amp; safety, integrity, human-computer interaction, psychology, philosophy, data analysis, expert consultation, adversarial assessments, strategic priorities, product roadmaps, cross-functional collaboration, communication, adversarial thinking, structured analysis, collaborative decision-making, fast-paced environment, high-growth environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f6c971e8-453f-4fd1-acc8-aa57e8bd4007</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>01a10ada-f52</externalid>
      <Title>Technical Lead, Safety Research</Title>
      <Description><![CDATA[<p><strong>Technical Lead, Safety Research</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$460K – $555K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work that ensures our best models can be safely deployed to the real world and benefit society. The team is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p><strong>About the Role</strong></p>
<p>As a technical lead, you will be responsible for developing our strategy in new directions to address potential harms from misalignment or significant mistakes. In practice, this will include:</p>
<ul>
<li>Setting north star goals and milestones for new research directions, and developing challenging evaluations to track progress.</li>
<li>Personally driving or leading research in new exploratory directions to demonstrate feasibility and scalability of the approaches.</li>
<li>Working horizontally across safety research and related teams to ensure different technical approaches work together to achieve strong safety results.</li>
</ul>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Set the research directions and strategies to make our AI systems safer, more aligned and more robust.</li>
<li>Coordinate and collaborate with cross-functional teams, including the rest of the research organization, T&amp;S, policy and related alignment teams, to ensure that our AI meets the highest safety standards.</li>
<li>Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.</li>
<li>Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.</li>
<li>Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter.</li>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, fairness &amp; biases.</li>
<li>Hold a Ph.D. or other degree in computer science, machine learning, or a related field.</li>
<li>Possess experience in safety work for AI model deployment.</li>
<li>Have an in-depth understanding of deep learning research and/or strong engineering skills.</li>
<li>Are a team player who enjoys collaborative work environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$460K – $555K</Salaryrange>
      <Skills>AI safety, RLHF, adversarial training, robustness, fairness &amp; biases, deep learning research, engineering skills, team player, collaborative work environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a privately held company.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/273b4c99-273e-4a70-aff9-19c0d959dcef</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>28cb565e-69a</externalid>
      <Title>Researcher, Health AI</Title>
      <Description><![CDATA[<p><strong>Researcher, Health AI</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is dedicated to ensuring the safety, robustness, and reliability of AI models towards their deployment in the real world.</p>
<p>OpenAI’s charter calls on us to ensure the benefits of AI are distributed widely. Our Health AI team is focused on enabling universal access to high-quality medical information. We work at the intersection of AI safety research and healthcare applications, aiming to create trustworthy AI models that can assist medical professionals and improve patient outcomes.</p>
<p><strong>About the Role</strong></p>
<p>We’re seeking strong researchers who are passionate about advancing AI safety and improving global health outcomes. As a Research Scientist, you will contribute to the development of safe and effective AI models for healthcare applications. You will implement practical and general methods to improve the behavior, knowledge, and reasoning of our models in these settings. This will require research into safety and alignment techniques that we aim to generalize towards safe and beneficial AGI.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and apply practical and scalable methods to improve safety and reliability of our models, including RLHF, automated red teaming, scalable oversight, etc.</li>
<li>Evaluate methods using health-related data, ensuring models provide accurate, reliable, and trustworthy information.</li>
<li>Build reusable libraries for applying general alignment techniques to our models.</li>
<li>Proactively understand the safety of our models and systems, identifying areas of risk.</li>
<li>Work with cross-team stakeholders to integrate methods in core model training and launch safety improvements in OpenAI’s products.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of ensuring AGI is universally beneficial and are aligned with OpenAI’s charter.</li>
<li>Demonstrate passion for AI safety and improving global health outcomes.</li>
<li>Have 4+ years of experience with deep learning research and LLMs, especially practical alignment topics such as RLHF, automated red teaming, scalable oversight, etc.</li>
<li>Hold a Ph.D. or other degree in computer science, AI, machine learning, or a related field.</li>
<li>Stay goal-oriented instead of method-oriented, and are not afraid of unglamorous but high-value work when needed.</li>
<li>Possess experience making practical model improvements for AI model deployment.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Are a team player who enjoys collaborative work environments.</li>
<li>Bonus: possess experience in health-related AI research or deployments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$295K – $445K • Offers Equity</Salaryrange>
      <Skills>Deep learning research, LLMs, RLHF, Automated red teaming, Scalable oversight, Health-related data, AI safety research, Healthcare applications, Trustworthy AI models, Medical professionals, Patient outcomes, Ph.D. or other degree in computer science, AI, machine learning, or a related field, Team player, Collaborative work environments, Health-related AI research or deployments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/bcbe08e3-9593-431d-bc99-37e35e035742</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>61280fe7-04a</externalid>
      <Title>Researcher, Interpretability</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Researcher, Interpretability</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Interpretability team studies internal representations of deep learning models. We are interested in using representations to understand model behavior, and in engineering models to have more understandable representations. We are particularly interested in applying our understanding to ensure the safety of powerful AI systems. Our working style is collaborative and curiosity-driven.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a researcher passionate about understanding deep networks, with a strong background in engineering, quantitative reasoning, and the research process. You will develop and carry out a research plan in mechanistic interpretability, in close collaboration with a highly motivated team. You will play a critical role in helping OpenAI ensure future models remain safe even as they grow in capability. This will make a significant impact on our goal of building and deploying safe AGI.</p>
<p>In this role, you will:</p>
<ul>
<li>Develop and publish research on techniques for understanding representations of deep networks.</li>
<li>Engineer infrastructure for studying model internals at scale.</li>
<li>Collaborate across teams to work on projects that OpenAI is uniquely suited to pursue.</li>
<li>Guide research directions toward demonstrable usefulness and/or long-term scalability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of ensuring AGI benefits all of humanity, and are aligned with OpenAI’s charter.</li>
<li>Show enthusiasm for long-term AI safety, and have thought deeply about technical paths to safe AGI.</li>
<li>Bring experience in the field of AI safety, mechanistic interpretability, or spiritually related disciplines.</li>
<li>Hold a Ph.D. or have research experience in computer science, machine learning, or a related field.</li>
<li>Thrive in environments involving large-scale AI systems, and are excited to make use of OpenAI’s unique resources in this area.</li>
<li>Possess 2+ years of research engineering experience and proficiency in Python or similar languages.</li>
<li>Are deeply curious.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$295K – $445K • Offers Equity</Salaryrange>
      <Skills>Python, Machine Learning, Deep Learning, Research Engineering, Computer Science, AI Safety, Mechanistic Interpretability, Quantitative Reasoning, Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c44268f1-717b-4da3-9943-2557f7d739f0</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4b87ca93-5dc</externalid>
      <Title>Senior Research Engineer - Data</Title>
      <Description><![CDATA[<p><strong>Senior Research Engineer - Data</strong></p>
<p><strong>About the role</strong></p>
<p>The Data team manages the complete lifecycle of data for researchers - from sourcing and large-scale processing to delivering datasets that power our models. Data sits at the heart of our Research efforts and enables all other teams. As part of the Data team, you’ll work with over a million hours of video and audio data.</p>
<p><strong>This role exists at the intersection of applied research, data engineering, and ML infrastructure rather than being a traditional research position</strong>.</p>
<p>You’ll build the world’s best human-centric data lake by collaborating closely with our model training teams. By understanding their requirements, you’ll extract new features and annotations that elevate our datasets. You should be passionate about enhancing model performance through high-quality, accurate datasets. Our infrastructure and pipelines are in great shape, and this role provides room to not only enhance them but also influence the team’s longer-term strategy.</p>
<p><strong>What we&#39;re looking for:</strong></p>
<ul>
<li>A strong background in data-centric, applied Machine Learning, with hands-on experience improving model performance through data quality, curation, labeling, and evaluation rather than model architecture alone</li>
<li>Experience working on the data layer of Generative AI products, particularly involving images, video, or audio</li>
<li>Excellent Python skills, with a strong focus on writing clean, maintainable, and well-tested code</li>
<li>Hands-on experience designing, building, and operating workflow orchestration systems and large-scale data processing pipelines</li>
</ul>
<p><strong>Why join us?</strong></p>
<p>We’re living in the golden age of AI. The next decade will yield the next generation of iconic companies, and we dare to say we have what it takes to become one. Here’s why:</p>
<p><strong>Our culture</strong></p>
<p>At Synthesia we’re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. <strong>You can find out more about these principles here.</strong></p>
<p><strong>Serving 50,000+ customers (and 50% of the Fortune 500)</strong></p>
<p>We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. Read stories from happy customers and what 1,200+ people say on G2.</p>
<p><strong>Proprietary AI technology</strong></p>
<p>Since 2017, we’ve been pioneering advancements in Generative AI. Our AI technology is built in-house by a team of world-class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.</p>
<p><strong>AI Safety, Ethics and Security</strong></p>
<p>AI safety, ethics, and security are fundamental to our mission. While the full scope of Artificial Intelligence&#39;s impact on our society is still unfolding, our position is clear: <strong>People first. Always.</strong>  Learn more about our commitments to AI Ethics, Safety &amp; Security.</p>
<p><strong>The good stuff...</strong></p>
<ul>
<li>Competitive compensation (salary + stock options + bonus)</li>
<li>Hybrid work setting with an office in London, Amsterdam, Zurich, Munich, or remote in Europe</li>
<li>25 days of annual leave + public holidays</li>
<li>Great company culture with the option to join regular planning and socials at our hubs</li>
<li>Plus other benefits, depending on your location</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Competitive compensation (salary + stock options + bonus)</Salaryrange>
      <Skills>Python, Machine Learning, Data Engineering, Workflow Orchestration, Large-Scale Data Processing, Generative AI, AI Research, AI Ethics, AI Safety</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.io.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. The company is headquartered in London, with offices and teams across Europe and the US.</Employerdescription>
      <Employerwebsite>https://www.synthesia.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/aa69627f-0c29-4416-b0e5-87bef74c768c</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1d33ea55-7f5</externalid>
      <Title>Researcher, Robustness &amp; Safety Training</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work to ensure our best models can be safely deployed to the real world to benefit society. It is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p>The Model Safety Research team aims to fundamentally advance our capabilities for precisely implementing robust, safe behavior in AI models, and to leverage these advances to make OpenAI’s deployed models safe and beneficial.  This requires a breadth of new ML research to address the growing set of safety challenges as AI becomes more powerful and used in more settings.  Key focus areas include how to enforce nuanced safety policies without trading off helpfulness and capabilities, how to make the model robust to adversaries, how to address privacy and security risks, and how to make the model trustworthy in safety-critical domains.</p>
<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. In this role, you will set directions for research to enable and empower safe AGI and work on research projects to make our AI systems safer, more aligned, and more robust to adversarial or malicious use. You will play a critical role in shaping what a safe AI system should look like in the future at OpenAI, making a significant impact on our mission to build and deploy safe AGI.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.</li>
<li>Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.</li>
<li>Set the research directions and strategies to make our AI systems safer, more aligned and more robust.</li>
<li>Coordinate and collaborate with cross-functional teams, including T&amp;S, legal, policy and other research teams, to ensure that our products meet the highest safety standards.</li>
<li>Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, fairness &amp; biases.</li>
<li>Hold a Ph.D. or other degree in computer science, machine learning, or a related field.</li>
<li>Possess experience in safety work for AI model deployment</li>
<li>Have an in-depth understanding of deep learning research and/or strong engineering skills.</li>
<li>Are a team player who enjoys collaborative work environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $445K</Salaryrange>
      <Skills>AI safety, RLHF, adversarial training, robustness, fairness &amp; biases, deep learning research, engineering skills, computer science, machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2560ed50-5535-42b8-b069-9ebc28ce7493</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>18a83d32-ae1</externalid>
      <Title>Researcher, Safety Oversight</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Researcher, Safety Oversight</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work to ensure our best models can be safely deployed to the real world to benefit society. It is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p>The Safety Oversight Research team aims to fundamentally advance our capabilities to maintain oversight over frontier AI models, and leverage these advances to ensure OpenAI’s deployed models are safe and beneficial. This requires a breadth of new ML research in the areas of human-AI collaboration, reasoning, robustness, and scalable oversight to keep pace with model capabilities.  We invest heavily in developing novel model and system-level methods of identifying and mitigating AI misuse and misalignment.</p>
<p>Our goal is to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. Your role will set directions for research to maintain effective oversight of safe AGI and work on research projects to identify and mitigate misuse and misalignment in our AI systems. You will play a critical role in defining how a safe AI system should look in the future at OpenAI, making a significant impact on our mission to build and deploy safe AGI.</p>
<p>In this role, you will:</p>
<ul>
<li>Develop and refine AI monitor models to detect and mitigate known and emerging patterns of misuse and misalignment.</li>
<li>Set research directions and strategies to make our AI systems safer, more aligned, and more robust.</li>
<li>Evaluate and design effective red-teaming pipelines to examine the end-to-end robustness of our safety systems, and identify areas for future improvement.</li>
<li>Conduct research to improve models’ ability to reason about questions of human values, and apply these improved models to practical safety challenges.</li>
<li>Coordinate and collaborate with cross-functional teams, including T&amp;S, legal, policy and other research teams, to ensure that our products meet the highest safety standards.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
<li>Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use.</li>
<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, human-AI collaboration, fairness &amp; biases.</li>
<li>Hold a Ph.D. or other degree in computer science, machine learning, or a related field.</li>
<li>Thrive in environments involving large-scale AI systems.</li>
<li>Possess 4+ years of research engineering experience and proficiency in Python or similar languages.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $445K • Offers Equity</Salaryrange>
      <Skills>AI safety, RLHF, human-AI collaboration, fairness &amp; biases, Python, research engineering, machine learning, computer science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/9b11373c-1643-4ea6-bbcd-033d5b8a0d3e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>bd0e1e90-d4b</externalid>
      <Title>Researcher, Trustworthy AI</Title>
      <Description><![CDATA[<p><strong>Researcher, Trustworthy AI</strong></p>
<p><strong>About the team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work to ensure our best models can be safely deployed to the real world to benefit society. It is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p><strong>About the role</strong></p>
<p>We are looking to hire exceptional research scientists/engineers who can push the rigor of work needed to increase societal readiness for AGI. Specifically, we are looking for people who can translate nebulous policy problems into technically tractable, measurable ones.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Set research directions and strategies to study the societal impacts of our models in an action-relevant manner, and tie these findings back into model design</li>
<li>Build creative methods and run experiments that enable public input into model values</li>
<li>Increase the rigor of external assurances by turning external findings into robust evaluations</li>
<li>Facilitate and grow our ability to effectively de-risk flagship model deployments in a timely manner</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
<li>Possess 3+ years of research experience (industry or similar academic experience) and proficiency in Python or similar languages</li>
<li>Thrive in environments involving large-scale AI systems and multimodal datasets</li>
<li>Enjoy working on large-scale, difficult, and nebulous problems in a well-resourced environment</li>
<li>Exhibit proficiency in the field of AI safety, focusing on topics like RLHF, adversarial training, robustness, LLM evaluations</li>
<li>Have past experience in interdisciplinary research</li>
<li>Show enthusiasm for socio-technical topics</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>Salary</strong></p>
<ul>
<li>$380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity and performance-related bonus(es) for eligible employees.</p>
<p><strong>Experience Level</strong></p>
<p>entry</p>
<p><strong>Employment Type</strong></p>
<p>full-time</p>
<p><strong>Workplace Type</strong></p>
<p>hybrid</p>
<p><strong>Category</strong></p>
<p>Engineering</p>
<p><strong>Industry</strong></p>
<p>Technology</p>
<p><strong>Salary Range</strong></p>
<p>$380K • Offers Equity</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>Python</li>
<li>Research experience</li>
<li>AI safety</li>
<li>RLHF</li>
<li>Adversarial training</li>
<li>Robustness</li>
<li>LLM evaluations</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Interdisciplinary research</li>
<li>Socio-technical topics</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K • Offers Equity</Salaryrange>
      <Skills>Python, Research experience, AI safety, RLHF, Adversarial training, Robustness, LLM evaluations, Interdisciplinary research, Socio-technical topics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/71acba5c-dbae-406f-b983-f40943c43068</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a426da4b-6d6</externalid>
      <Title>Technical Program Manager, Trustworthy AI</Title>
      <Description><![CDATA[<p><strong>Technical Program Manager, Trustworthy AI</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Technical Program Management</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$207K – $335K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Trustworthy AI team is investing in external assurances to build out a robust safety ecosystem. This includes enabling third party assessments for OpenAI’s flagship launches, piloting new assurance mechanisms like safety compliance reviews, and incorporating independent expert input as evidence for critical safety decisions. This role requires a blend of partnership management, cross functional coordination, an understanding of AI safety research and evaluations, and strong communication skills to synthesize findings and translate into decision relevant actions.</p>
<p><strong>About the Role</strong></p>
<p>As a Technical Program Manager on the Trustworthy AI team, you will drive interdisciplinary programs in collaboration with external partners. This includes strategizing and executing on the vision for strategic research partnerships, and growing and managing our external assurance programs which include third party assessments and enabling independent academic research.</p>
<p>We’re looking for people who have experience running strategic academic collaborations and program management with technical and research teams. You will work with researchers/engineers both internal to OpenAI and as a part of the external community to initiate new projects, set ambitious goals and milestones, and drive execution across multiple teams.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Create strategic research partnerships</li>
<li>Proactively identify new partners for external assurances such as high quality third party evaluators and academic research labs</li>
<li>Create feedback mechanisms for translating external research into actionable product and policy recommendations</li>
<li>Communicate progress, status and risk effectively to stakeholders internally and externally</li>
<li>Drive tool and process improvements to improve efficiency</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have an understanding of AI evaluations and measurements and the ability to engage with technical teams on AI evaluations</li>
<li>Have experience working with and managing stakeholders external to an organization, especially academic researchers</li>
<li>Can create executive summaries and syntheses of technical and social science research</li>
<li>Have worked cross-functionally across product, research, and engineering teams</li>
<li>Have an understanding of, and interest in, frontier AI safety and policy</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207K – $335K • Offers Equity</Salaryrange>
      <Skills>Technical Program Management, Strategic Research Partnerships, External Assurance Programs, AI Safety Research and Evaluations, Cross Functional Coordination, Communication Skills, AI Evaluations and Measurements, Academic Research, Executive Summaries and Synthesis of Technical and Social Science Research, Cross Functional Collaboration, Frontier AI Safety and Policy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2ce31dca-6a6b-4bc3-abab-9b7ed5ba92d5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>119df59e-db7</externalid>
      <Title>Software Engineer, AI Safety</Title>
      <Description><![CDATA[<p><strong>Software Engineer, AI Safety</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$185K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is dedicated to ensuring the safety, robustness, and reliability of AI models and their deployment in the real world.</p>
<p>Building on the many years of our practical alignment work and applied safety efforts, Safety Systems addresses emerging safety issues and develops new fundamental solutions to enable the safe deployment of our most advanced models and future AGI, to make AI that is beneficial and trustworthy.</p>
<p>Learn more about OpenAI’s approach to safety</p>
<p><strong>About the Role</strong></p>
<p>At OpenAI, we&#39;re dedicated to advancing artificial intelligence, and we know that creating a secure and reliable platform is vital to our mission. That&#39;s why we&#39;re seeking a software engineer to help us build out our trust and safety capabilities.</p>
<p>In this role, you&#39;ll work with our entire engineering team to design and implement systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You&#39;ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Architect, build, and maintain anti-abuse and content moderation infrastructure designed to protect us and end users from unwanted behavior.</li>
<li>Work closely with our other engineers and researchers to utilize both industry standard and novel AI techniques to measure, monitor and improve AI models’ alignment to human values.</li>
<li>Diagnose and remediate active incidents on the platform and build new tooling and infrastructure that address the root causes of system failure.</li>
</ul>
<p><strong>You might thrive in this role if:</strong></p>
<ul>
<li>You have built and run production services in a high-growth, rapidly scaling environment.</li>
<li>You can debug live issues and restore systems quickly.</li>
<li>You have worked on content safety, fraud, or abuse, or are motivated and excited to work on present-day (“now-term”) AI safety.</li>
<li>You have experience with Python, or are proficient in modern languages such as C++, Rust, or Go and can quickly ramp up on Python.</li>
<li>You understand the trade-offs between capabilities and risks and can navigate them to deploy novel products and features safely.</li>
<li>You can critically assess the risks of a new product or feature and devise innovative solutions to mitigate those risks without harming the product experience.</li>
<li>You’re pragmatic. You know when to build a quick, good-enough fix, and when to invest in a robust, lasting solution.</li>
<li>You possess strong project management skills. You are self-directed and can remove roadblocks to drive projects to completion with minimal guidance.</li>
<li>You’ve deployed classifiers or machine learning models, or are excited to learn about modern ML infra.</li>
</ul>
<p><strong>Our tech stack</strong></p>
<ul>
<li>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills who understand the fundamental problems these tools solve, and can quickly pick up new tools and frameworks.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$185K – $325K • Offers Equity</Salaryrange>
      <Skills>Python, Terraform, Kubernetes, Azure, Postgres, Kafka, C++, Rust, Go, Content safety, Fraud, Abuse, AI safety, Machine learning, Classifiers, ML infra, Project management, Debugging, System administration, Cloud computing, Containerization, DevOps, Agile development, Scrum, Kanban</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>185000</Compensationmin>
      <Compensationmax>325000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/b9dee2a0-9bb3-447e-9bce-2b1bed784e5b</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>02b9c0f2-a03</externalid>
      <Title>Software Engineer, Integrity Foundations</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Integrity Foundations</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale.</p>
<p>The Applied Foundations team is at the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability. Integrity Foundations provides the core building blocks and infrastructure for this work.</p>
<p><strong>About the Role</strong></p>
<p>At OpenAI, our mission is to advance AI in a way that is safe, reliable, and aligned with broad societal values. The applied foundations role is crucial for maintaining the trustworthiness of our platforms. You will be pivotal in developing robust defenses against a spectrum of adversarial behaviors that threaten our ecosystem.</p>
<p>In this role, you&#39;ll work with our entire engineering team to design and implement systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You&#39;ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Develop and enhance systems to detect and prevent various forms of abuse, including financial fraud, botting, and scripting.</li>
<li>Collaborate with cross-functional teams to design solutions that protect against and mitigate adversarial attacks without compromising user experience.</li>
<li>Assist with response to active incidents on the platform and build new tooling and infrastructure that address the underlying problems.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have at least 3 years of professional software engineering experience.</li>
<li>Have experience setting up and maintaining production backend services and data pipelines.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Are self-directed and enjoy figuring out the best way to solve a particular problem.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Care about AI safety in production environments and have the expertise to build software systems that defend against abuse.</li>
<li>Build tools to accelerate your own workflows, but only when off-the-shelf solutions won&#39;t do.</li>
</ul>
<p><strong>Our tech stack</strong></p>
<ul>
<li>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills and the ability to quickly pick up new tools and frameworks.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Terraform, Kubernetes, Azure, Python, Postgres, Kafka, AI Safety, Software Engineering, Backend Services, Data Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/991948b7-0305-4125-bb9a-625f5bc24189</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>