<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while making data-driven technical decisions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>56a72069-e42</externalid>
      <Title>Staff+ Software Engineer, Backend</Title>
<Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for experienced, product-minded engineers to own the backend systems that power user experiences across our API, Claude Code, and Claude.ai.</p>
<p>You&#39;ll independently scope complex, multi-month projects through ambiguous problem spaces and lead peers through technical and product decisions; you&#39;ll drive alignment with product, peer engineering teams, and research to identify capability gaps and translate frontier model improvements into shipped products.</p>
<p>You&#39;ll make architectural decisions that affect the reliability and scalability of systems serving hundreds of thousands of global users (including internal teams), and design processes that help your team operate effectively and never fail the same way twice - all while staying hands-on with the code and our models.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>API Core</strong></p>
<p>You&#39;ll build and scale the foundation of the Claude API: the systems that deliver Claude&#39;s intelligence to every developer, from startups to enterprise. You&#39;ll own the performance, reliability, and efficiency of our core serving path, ensuring users get the most speed and value from our models. You&#39;ll partner closely with inference and safeguards to optimise the full stack.</p>
<p><strong>API Capabilities</strong></p>
<p>You&#39;ll bring frontier model capabilities to developers through the Claude API, owning core features like vision, tool use, and computer use. You&#39;ll launch new models and ship the primitives that make Claude more capable with every release. You&#39;ll partner directly with research and inference to productionise what&#39;s next.</p>
<p><strong>API Knowledge</strong></p>
<p>You&#39;ll focus on transforming Claude into a true knowledge worker by ensuring the model has access to and understanding of the right knowledge at the right time. You&#39;ll work on making it possible for developers to securely give Claude access to their data while automatically processing and retrieving relevant information. You&#39;ll partner directly with research to bring state-of-the-art retrieval advancements to developers.</p>
<p><strong>Developer Experience</strong></p>
<p>You&#39;ll focus on building products and tools that enable developers to harness the full power of LLMs to create successful, reliable, and groundbreaking applications with ease. You&#39;ll build the tools to accelerate developers from idea to deployment, and you&#39;ll help figure out how to leverage Claude to improve developers&#39; usage of the API, such as generating and evaluating prompts, while collaborating closely with the teams above to bring Claude&#39;s current and future capabilities to developers.</p>
<p><strong>API Agents</strong></p>
<p>You&#39;ll focus on building the infrastructure and APIs that enable developers to create powerful agentic applications within the Claude API. You&#39;ll help developers with agent orchestration through capabilities like tool use, multi-step reasoning, and long-running task execution that allow Claude to take actions and accomplish complex goals on behalf of users. You&#39;ll partner with research to bring cutting-edge agent capabilities to production.</p>
<p><strong>Enterprise Foundations</strong></p>
<p>We&#39;re looking for a software engineer to join our Enterprise Foundations team: the team that makes Claude enterprise-ready at scale. When a Fortune 500 company wants to roll out Claude to 100,000 employees, we&#39;re the team that makes it possible. You&#39;ll build the foundational systems that large organisations require before they can deploy AI at scale: user and permissions management, security and compliance features, and analytics infrastructure. This work directly converts product-market fit into revenue by removing the deployment blockers that prevent large organisations from adopting Claude broadly.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 8+ years of relevant experience as a backend or product engineer, with a track record of leading complex, multi-month projects or teams as a tech lead or equivalent</li>
<li>Have strong coding fundamentals, are comfortable working across backend systems, APIs, and integrations, and can reach into the frontend when needed to ship an effective solution</li>
<li>Have led the design and delivery of large-scale backend systems in production that power high-adoption B2B or consumer-facing products</li>
<li>Are skilled at driving alignment across technical and non-technical teams; you communicate clearly, influence technical decisions beyond your immediate team, and help others ramp effectively on your systems</li>
<li>Take a product-focused approach to your work and care about building solutions that are robust, scalable, and easy to use</li>
<li>Care deeply about investing in the mentorship and growth of your peers</li>
<li>Have experience with distributed systems, API design, and cloud infrastructure at scale</li>
<li>Thrive in fast-paced environments and can navigate ambiguity to find the highest-leverage path forward</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Served as a technical lead or architect on a product or API platform, owning both the technical vision and execution end-to-end</li>
<li>Experience designing and scaling APIs with a focus on developer experience, consistency, and reliability, including API design review processes</li>
<li>Deep experience building enterprise SaaS platforms, including permissions infrastructure, billing and pricing systems, or compliance frameworks for regulated industries (SOC 2, HIPAA)</li>
<li>Background in a specific industry vertical (financial services, healthcare, or legal technology), with a track record of building products that handle sensitive, domain-specific data</li>
<li>Experience partnering with ML/AI research teams to productise model capabilities or identify and address model failure modes in production</li>
<li>Experience building agentic systems, orchestration frameworks, or developer tools, including CLI tools, IDE integrations, or AI-assisted coding environments</li>
<li>Experience building products where adoption and activation are core challenges: instrumenting funnels, diagnosing drop-off, and shipping the product changes that close gaps</li>
<li>Experience designing operational processes (incident response, on-call rotations, postmortem review) for production systems serving large-scale developer or enterprise audiences</li>
</ul>
<p><strong>Salary</strong></p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>backend, product engineer, API design, cloud infrastructure, distributed systems, API design review processes, permissions infrastructure, billing and pricing systems, compliance frameworks, regulated industries, HIPAA, SOC 2, ML/AI research teams, model capabilities, model failure modes, agentic systems, orchestration frameworks, developer tools, CLI tools, IDE integrations, AI-assisted coding environments, adoption and activation, funnel instrumentation, drop-off diagnosis, product changes, operational processes, incident response, on-call rotations, postmortem review, technical lead, architect, product platform, API platform, technical vision, execution end-to-end, developer experience, consistency, reliability, enterprise SaaS platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174755008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc17980d-461</externalid>
      <Title>Research Engineer, Interpretability</Title>
<Description><![CDATA[<p>Job Title: Research Engineer, Interpretability</p>
<p>Location: San Francisco, CA</p>
<p>Department: AI Research &amp; Engineering</p>
<p>Job Description:</p>
<p>When you see what modern language models are capable of, do you wonder, &quot;How do these things work? How can we trust them?&quot; The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.</p>
<p>Think of us as doing &quot;neuroscience&quot; of neural networks using &quot;microscopes&quot; we build - or reverse-engineering neural networks like binary programs.</p>
<p>More resources to learn about our work:</p>
<ul>
<li>Our research blog - covering advances including Monosemantic Features and Circuits</li>
<li>An Introduction to Interpretability from our research lead, Chris Olah</li>
<li>The Urgency of Interpretability from CEO Dario Amodei</li>
<li>Engineering Challenges Scaling Interpretability - directly relevant to this role</li>
<li>60 Minutes segment - around 8:07, see a demo of tooling our team built</li>
<li>New Yorker article - what it&#39;s like to work on one of AI&#39;s hardest open problems</li>
</ul>
<p>Even if you haven&#39;t worked on interpretability before, the infrastructure expertise is similar to what&#39;s needed across the lifecycle of a production language model:</p>
<ul>
<li>Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips</li>
<li>Inference: Interp runs a customized inference stack. Day-to-day analysis requires services that allow editing a model&#39;s internal activations mid-forward-pass - for example, adding a &quot;steering vector&quot;</li>
<li>Performance: Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving ahead, given our rapidly evolving research and safety mission</li>
</ul>
<p>The science keeps scaling - and it&#39;s now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application</li>
<li>Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams</li>
<li>Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers</li>
<li>Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations</li>
<li>Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5-10+ years of experience building software</li>
<li>Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python</li>
<li>Are extremely curious about unfamiliar domains; can quickly learn and put that knowledge to work, e.g. diving into new layers of the stack to find bottlenecks</li>
<li>Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions</li>
<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>
<li>Are curious about interpretability research and its role in AI safety (though no research experience is required!)</li>
<li>Care about the societal impacts and ethics of your work</li>
<li>Are comfortable working closely with researchers, translating research needs into engineering solutions</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Optimizing the performance of large-scale distributed systems</li>
<li>Language modeling fundamentals with transformers</li>
<li>High-performance LLM optimization: memory management, compute efficiency, parallelism strategies, inference throughput optimization</li>
<li>Working hands-on in a mainstream ML stack - PyTorch/CUDA on GPUs or JAX/XLA on TPUs</li>
<li>Collaborating closely with researchers and building tooling to support research teams, or directly performing research with complex engineering challenges</li>
</ul>
<p>Representative Projects:</p>
<ul>
<li>Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations</li>
<li>Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them</li>
<li>Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization</li>
<li>Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)</li>
</ul>
<p>Role-Specific Location Policy: This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $315,000-$560,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$315,000-$560,000 USD</Salaryrange>
      <Skills>Python, Rust, Go, Java, PyTorch, CUDA, JAX, XLA, High Performance LLM optimization, memory management, compute efficiency, parallelism strategies, inference throughput optimization, large-scale distributed systems, language modeling fundamentals, transformers, collaborating closely with researchers, building tooling to support research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980430008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2ff13306-80c</externalid>
      <Title>Staff+ Software Engineer, Backend</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for experienced, product-minded engineers to own the backend systems that power user experiences across our API, Claude Code, and Claude.ai.</p>
<p>You&#39;ll independently scope complex, multi-month projects through ambiguous problem spaces and lead peers through technical and product decisions; you&#39;ll drive alignment with product, peer engineering teams, and research to identify capability gaps and translate frontier model improvements into shipped products.</p>
<p>You&#39;ll make architectural decisions that affect the reliability and scalability of systems serving hundreds of thousands of global users (including internal teams), and design processes that help your team operate effectively and never fail the same way twice - all while staying hands-on with the code and our models.</p>
<p><strong>Teams</strong></p>
<p>We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organisational needs.</p>
<ul>
<li>API Core: You&#39;ll build and scale the foundation of the Claude API: the systems that deliver Claude&#39;s intelligence to every developer, from startups to enterprise.</li>
<li>API Capabilities: You&#39;ll bring frontier model capabilities to developers through the Claude API, owning core features like vision, tool use, and computer use.</li>
<li>API Knowledge: You&#39;ll focus on transforming Claude into a true knowledge worker by ensuring the model has access to and understanding of the right knowledge at the right time.</li>
<li>Developer Experience: You’ll focus on building products and tools to enable developers to harness the full power of LLMs to create successful, reliable, and groundbreaking applications with ease.</li>
<li>API Agents: You&#39;ll focus on building the infrastructure and APIs that enable developers to create powerful agentic applications within the Claude API.</li>
<li>Enterprise Foundations: We&#39;re looking for a software engineer to join our Enterprise Foundations team: the team that makes Claude enterprise-ready at scale.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 8+ years of relevant experience as a backend or product engineer, with a track record of leading complex, multi-month projects or teams as a tech lead or equivalent.</li>
<li>Have strong coding fundamentals, are comfortable working across backend systems, APIs, and integrations, and can reach into the frontend when needed to ship an effective solution.</li>
<li>Have led the design and delivery of large-scale backend systems in production that power high-adoption B2B or consumer-facing products.</li>
<li>Are skilled at driving alignment across technical and non-technical teams; you communicate clearly, influence technical decisions beyond your immediate team, and help others ramp effectively on your systems.</li>
<li>Take a product-focused approach to your work and care about building solutions that are robust, scalable, and easy to use.</li>
<li>Care deeply about investing in the mentorship and growth of your peers.</li>
<li>Have experience with distributed systems, API design, and cloud infrastructure at scale.</li>
<li>Thrive in fast-paced environments and can navigate ambiguity to find the highest-leverage path forward.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Served as a technical lead or architect on a product or API platform, owning both the technical vision and execution end-to-end.</li>
<li>Experience designing and scaling APIs with a focus on developer experience, consistency, and reliability, including API design review processes.</li>
<li>Deep experience building enterprise SaaS platforms, including permissions infrastructure, billing and pricing systems, or compliance frameworks for regulated industries (SOC 2, HIPAA).</li>
<li>Background in a specific industry vertical (financial services, healthcare, or legal technology), with a track record of building products that handle sensitive, domain-specific data.</li>
<li>Experience partnering with ML/AI research teams to productize model capabilities or identify and address model failure modes in production.</li>
<li>Experience building agentic systems, orchestration frameworks, or developer tools, including CLI tools, IDE integrations, or AI-assisted coding environments.</li>
<li>Experience building products where adoption and activation are core challenges: instrumenting funnels, diagnosing drop-off, and shipping the product changes that close gaps.</li>
<li>Experience designing operational processes (incident response, on-call rotations, postmortem review) for production systems serving large-scale developer or enterprise audiences.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>backend systems, APIs, cloud infrastructure, distributed systems, API design, product development, team leadership, communication, influence, mentorship, growth, API design review processes, enterprise SaaS platforms, permissions infrastructure, billing and pricing systems, compliance frameworks, regulated industries, ML/AI research teams, agentic systems, orchestration frameworks, developer tools, CLI tools, IDE integrations, AI-assisted coding environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174755008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>97212bdf-dd1</externalid>
      <Title>Research Engineer, Interpretability</Title>
      <Description><![CDATA[<p>Job Title: Research Engineer, Interpretability</p>
<p>About the Role:</p>
<p>When you see what modern language models are capable of, do you wonder, &quot;How do these things work? How can we trust them?&quot; The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.</p>
<p>Think of us as doing &quot;neuroscience&quot; of neural networks using &quot;microscopes&quot; we build - or reverse-engineering neural networks like binary programs.</p>
<p>More resources to learn about our work:</p>
<ul>
<li>Our research blog - covering advances including Monosemantic Features and Circuits</li>
<li>An Introduction to Interpretability from our research lead, Chris Olah</li>
<li>The Urgency of Interpretability from CEO Dario Amodei</li>
<li>Engineering Challenges Scaling Interpretability - directly relevant to this role</li>
<li>60 Minutes segment - Around 8:07, see a demo of tooling our team built</li>
<li>New Yorker article - what it&#39;s like to work on one of AI&#39;s hardest open problems</li>
</ul>
<p>Even if you haven&#39;t worked on interpretability before, the infrastructure expertise is similar to what&#39;s needed across the lifecycle of a production language model:</p>
<ul>
<li>Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips</li>
<li>Inference: Interp runs a customized inference stack. Day-to-day analysis requires services that allow editing a model&#39;s internal activations mid-forward-pass - for example, adding a &quot;steering vector&quot;</li>
<li>Performance: Like all LLM work, we push up against the limits of hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving ahead, given our rapidly evolving research and safety mission</li>
</ul>
<p>The science keeps scaling - and it&#39;s now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application</li>
<li>Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams</li>
<li>Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers</li>
<li>Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations</li>
<li>Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5-10+ years of experience building software</li>
<li>Are highly proficient in at least one programming language (e.g., Python, Rust, Go, Java) and productive with Python</li>
<li>Are extremely curious about unfamiliar domains; can quickly learn and put that knowledge to work, e.g. diving into new layers of the stack to find bottlenecks</li>
<li>Have a strong ability to prioritize the most impactful work and are comfortable operating with ambiguity and questioning assumptions</li>
<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>
<li>Are curious about interpretability research and its role in AI safety (though no research experience is required!)</li>
<li>Care about the societal impacts and ethics of your work</li>
<li>Are comfortable working closely with researchers, translating research needs into engineering solutions</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Optimizing the performance of large-scale distributed systems</li>
<li>Language modeling fundamentals with transformers</li>
<li>High-performance LLM optimization: memory management, compute efficiency, parallelism strategies, inference throughput optimization</li>
<li>Working hands-on in a mainstream ML stack - PyTorch/CUDA on GPUs or JAX/XLA on TPUs</li>
<li>Collaborating closely with researchers and building tooling to support research teams, or directly performing research with complex engineering challenges</li>
</ul>
<p>Representative Projects:</p>
<ul>
<li>Building Garcon, a tool that allows researchers to easily instrument LLMs to extract internal activations</li>
<li>Designing and optimizing a pipeline to efficiently collect petabytes of transformer activations and shuffle them</li>
<li>Profiling and optimizing ML training jobs, including multi-GPU parallelism and memory optimization</li>
<li>Building a steered inference system that applies targeted interventions to model internals at scale (conceptually similar to Golden Gate Claude but for safety research)</li>
</ul>
<p>Role Specific Location Policy:</p>
<ul>
<li>This role is based in the San Francisco office; however, we are open to considering exceptional candidates for remote work on a case-by-case basis.</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $315,000-$560,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$315,000-$560,000 USD</Salaryrange>
      <Skills>Python, Rust, Go, Java, PyTorch, CUDA, JAX, XLA, Transformers, High Performance LLM optimization, Memory management, Compute efficiency, Parallelism strategies, Inference throughput optimization, Optimizing the performance of large-scale distributed systems, Language modeling fundamentals, Collaborating closely with researchers and building tooling to support research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>315000</Compensationmin>
      <Compensationmax>560000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980430008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5b743bb-d8f</externalid>
      <Title>Product Manager, AI Platforms</Title>
      <Description><![CDATA[<p>The AI Platform Product Manager will drive the strategy and execution of Shield AI&#39;s next-generation autonomy intelligence stack. This PM owns the product vision and roadmap for the Hivemind AI Platform, ensuring we can manufacture, govern, and field advanced world models, robotics foundation models, and vision-language-action systems safely and at scale.</p>
<p>This role sits at the intersection of AI/ML, autonomy, model lifecycle, infrastructure, and product strategy. The PM partners closely with engineering, AI research, Hivemind Solutions, and field teams to deliver the tooling that enables sovereign autonomy, AI Factories at the edge, and continuous learning: capabilities that are central to Shield AI&#39;s strategic direction.</p>
<p>This is a high-impact role for an experienced product leader excited to define how foundation models are trained, validated, governed, and deployed across thousands of autonomous systems in highly contested environments.</p>
<p><strong>Responsibilities:</strong></p>
<p><strong>AI Model Development &amp; Training Platform</strong></p>
<p>Own the roadmap for foundation model training workflows, including dataset ingestion, curation, labeling, synthetic data generation, domain model training, and distillation pipelines. Define requirements for world models, robotics models, and VLA-based training, evaluation, and specialization. Lead the evolution of MLOps capabilities in Forge, including data lineage, experiment tracking, model versioning, and scalable evaluation suites.</p>
<p><strong>Data, Simulation &amp; Synthetic Data Factory</strong></p>
<p>Define product requirements for synthetic data generation, simulation-integrated data flywheels, and automated scenario generation. Partner with Digital Twin, Simulation, and autonomy teams to convert natural-language mission inputs into data needs, training procedures, and model variants.</p>
<p><strong>Safe Deployment &amp; Model Governance</strong></p>
<p>Lead the development of model governance and auditability tooling, including model cards, dataset rights, lineage tracking, safety gates, and compliance evidence. Build guardrails and workflows to safely deploy models onto edge hardware in disconnected, GPS- or comms-denied environments. Partner with Safety, Certification, Cyber, and Engineering teams to ensure traceability and evaluation pipelines meet operational and accreditation requirements.</p>
<p><strong>Edge Deployment &amp; AI Factory Integration</strong></p>
<p>Partner with Pilot, EdgeOS, and hardware teams to integrate foundation-model-based perception and reasoning into autonomy behaviors. Define requirements for distillation, quantization, and inference tooling as part of the “three-computer” development and deployment model. Ensure closed-loop workflows between cloud model training and edge-native execution.</p>
<p><strong>Cross-Functional Leadership</strong></p>
<p>Collaborate with Engineering, Research, Product, Customer Engagement, and Solutions teams to ensure model outputs meet mission and platform constraints. Translate advanced AI capabilities into intuitive workflows that platform OEMs and partner nations can use to build sovereign AI factories. Sequence foundational capabilities that unblock autonomy, simulation, and customer-facing product teams.</p>
<p><strong>User &amp; Customer Impact</strong></p>
<p>Develop deep empathy for ML engineers, autonomy developers, and Solutions engineers who rely on the platform. Capture operational data gaps, mission-driven model needs, and domain-specific specialization requirements. Lead demos and onboarding for model-development capabilities across internal and external teams.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000 - $290,000 a year</Salaryrange>
      <Skills>AI Model Development &amp; Training Platform, Data, Simulation &amp; Synthetic Data Factory, Safe Deployment &amp; Model Governance, Edge Deployment &amp; AI Factory Integration, Cross-Functional Leadership, User &amp; Customer Impact, Strong engineering background, Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure, Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments, Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows, Experience working on autonomy, robotics, embedded AI, or mission-critical systems, Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures, Experience supporting defense, dual-use, or safety-critical AI systems, Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment), Advanced degree in engineering, ML/AI, robotics, or a related field</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>190000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/7886f437-2d5e-4616-8dcb-3dc488f1f585</Applyto>
      <Location>San Diego</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7e3331e3-3f3</externalid>
      <Title>Software Engineer, Research - Human Data</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Research - Human Data</strong></p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. A key part of achieving that mission is training models that deeply understand and reflect human preferences — the <strong>Human Data</strong> team is at the heart of that effort.</p>
<p>The Human Data engineering team creates the systems that enable scalable, high-quality human feedback. These systems are essential to how OpenAI trains and improves its most advanced models. Engineers on this team collaborate closely with world-class researchers to bring alignment techniques to life — from experimental ideas to production-ready feedback loops.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for software engineers to join the Human Data team and build the platforms, prototypes, tools, and infrastructure that power how our AI models are trained, aligned, and evaluated. You’ll partner with researchers and cross-functional teams to bring alignment ideas to life, influence future model training, and shape how models interact with the real world.</p>
<p>We’re looking for people who are excited by technical ownership, enjoy working across the stack, and are eager to solve ambiguous problems in a high-impact, fast-paced environment.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and maintain robust full-stack systems for feedback collection, data labeling, and evaluation pipelines, while maintaining high levels of security</li>
<li>Translate experimental alignment research into scalable production infrastructure, including inference and model training stacks</li>
<li>Design and iterate on user-facing tools and backend services to support high-quality data workflows</li>
<li>Partner with researchers, engineers, and program leads to shape feedback loops and model interaction paradigms</li>
<li>Drive infrastructure improvements that enable faster iteration and scaling across OpenAI’s frontier models, from internal research tooling all the way to production ChatGPT</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong software engineering fundamentals and experience building production systems at scale</li>
<li>Enjoy full-stack development with end-to-end ownership — from backend pipelines to user interfaces</li>
<li>Are motivated by high-impact collaboration with research teams and solving novel, ambiguous problems</li>
<li>Are excited to shape how AI systems learn from human preferences and reflect a broad range of human values</li>
<li>Care deeply about inclusive tooling and building systems that enhance model safety, reliability, and usefulness</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>US$230K – $385K (San Francisco) / £131K – £245K (London) • Offers Equity</Salaryrange>
      <Skills>software engineering, full-stack development with end-to-end ownership, production systems at scale, data labeling, evaluation pipelines, security, inference and model training stacks, user-facing tools, backend services, data workflows, research collaboration, model interaction paradigms, infrastructure improvements, model safety and reliability, inclusive tooling, learning from human preferences</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4d6a5951-9838-434c-830a-22cb938ea228</Applyto>
      <Location>San Francisco; London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9f4dd6c6-95f</externalid>
      <Title>AI Deployment Engineer</Title>
      <Description><![CDATA[<p><strong>AI Deployment Engineer</strong></p>
<p>Delhi, India · Full time · Hybrid</p>
<p><strong>About the Team</strong></p>
<p>The AI Deployment Engineering team is responsible for ensuring the safe and effective deployment of Generative AI applications for developers and enterprises. We act as a trusted advisor and thought partner for our customers, working to build an effective backlog of GenAI use cases for their industry and drive them to production through strong technical guidance.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a driven solutions leader with a product mindset to partner with our customers and ensure they achieve tangible business value with GenAI. You will pair with senior customer leaders to establish a GenAI strategy and identify the highest value applications. You’ll then partner with their technical teams to move from prototype through production. You’ll take a holistic view of their needs and design an enterprise architecture using ChatGPT, OpenAI API, and other services to maximize customer value. You will collaborate closely with Sales, Solutions Engineering, Applied Research, and Product teams, and you will report to the Head of Technical Success, APAC.</p>
<p>This role can be based in Delhi, Mumbai or Bangalore. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Deeply embed with our most strategic platform customers as the technical lead, serving as their technical thought partner to ideate and build novel applications on our API.</li>
<li>Work with senior customer stakeholders to identify the best applications of GenAI in their industry and to build/qualify a comprehensive backlog to support their AI roadmap.</li>
<li>Intervene directly to accelerate customer time to value through building hands-on prototypes and/or by delivering impactful strategic guidance.</li>
<li>Forge and manage relationships with our customers’ leadership and stakeholders to ensure the successful deployment and scale of their applications.</li>
<li>Contribute to our open-source developer and enterprise resources.</li>
<li>Scale the Solutions Architect function through sharing knowledge, codifying best practices, and publishing notebooks to our internal and external repositories.</li>
<li>Validate, synthesize, and deliver high-signal feedback to the Product and Research teams.</li>
</ul>
<p><strong>You’ll thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years of technical consulting (or equivalent) experience, bridging technical teams and senior business stakeholders.</li>
<li>Are an effective and polished communicator who can translate business and technical topics for all audiences.</li>
<li>Have led complex implementations of Generative AI/traditional ML solutions and have knowledge of network/cloud architecture.</li>
<li>Have industry experience in programming languages like Python or JavaScript.</li>
<li>Own problems end-to-end and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Are an effective, high-throughput operator who can drive multiple concurrent projects and prioritize ruthlessly.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Generative AI, Traditional ML, Network/cloud architecture, Python, JavaScript, Technical consulting, Problem-solving, Communication, Leadership, Open-source developer and enterprise resources, Solutions Architect function, Product and Research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/bf036b23-cd23-46d0-a02f-4b1483f4698a</Applyto>
      <Location>Delhi, India</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ceda79b7-74b</externalid>
      <Title>AI Deployment Engineer</Title>
      <Description><![CDATA[<p><strong>AI Deployment Engineer</strong></p>
<p>We are looking for a driven solutions leader with a product mindset to partner with our customers and ensure they achieve tangible business value with GenAI.</p>
<p><strong>About the Team</strong></p>
<p>The AI Deployment Engineering team is responsible for ensuring the safe and effective deployment of Generative AI applications for developers and enterprises. We act as a trusted advisor and thought partner for our customers, working to build an effective backlog of GenAI use cases for their industry and drive them to production through strong technical guidance.</p>
<p><strong>About the Role</strong></p>
<p>As an AI Deployment Engineer, you’ll help the largest companies transform their business through solutions such as customer service, automated content generation, and novel applications that make use of our newest, most exciting models.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Deeply embed with our most strategic platform customers as the technical lead, serving as their technical thought partner to ideate and build novel applications on our API.</li>
<li>Work with senior customer stakeholders to identify the best applications of GenAI in their industry and to build/qualify a comprehensive backlog to support their AI roadmap.</li>
<li>Intervene directly to accelerate customer time to value through building hands-on prototypes and/or by delivering impactful strategic guidance.</li>
<li>Forge and manage relationships with our customers’ leadership and stakeholders to ensure the successful deployment and scale of their applications.</li>
<li>Contribute to our open-source developer and enterprise resources.</li>
<li>Scale the Solutions Architect function through sharing knowledge, codifying best practices, and publishing notebooks to our internal and external repositories.</li>
<li>Validate, synthesise, and deliver high-signal feedback to the Product and Research teams.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have 6+ years of technical consulting (or equivalent) experience, bridging technical teams and senior business stakeholders.</li>
<li>Are an effective and polished communicator who can translate business and technical topics for all audiences.</li>
<li>Have led complex implementations of Generative AI/traditional ML solutions and have knowledge of network/cloud architecture.</li>
<li>Have industry experience in programming languages like Python or JavaScript.</li>
<li>Own problems end-to-end and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Are an effective, high-throughput operator who can drive multiple concurrent projects and prioritise ruthlessly.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>Generative AI, Python, JavaScript, Network/cloud architecture, Technical consulting, Effective communication, Open-source developer and enterprise resources, Solutions Architect function, Product and Research teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a leading organisation in the field of AI.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/8acfba11-707e-4d8e-a860-88643fae24ba</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>417e683d-a6d</externalid>
      <Title>Research Communications Manager</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Research Communications Manager</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Communications</p>
<p><strong>Compensation</strong></p>
<p>$185K – $205K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity.</p>
<p>Our Communications team’s ethos is to support OpenAI&#39;s mission and goals by clearly and authentically explaining our technology, values, and approach to safely building powerful AI.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking an experienced communications professional to join our Platform &amp; Research Communications team. This role will work closely with the Research Communications Lead and in partnership with applied and engineering teams to shape how OpenAI’s scientific work is understood by researchers, journalists, policymakers, and the broader public.</p>
<p>This position is responsible for developing and executing external communications strategies around OpenAI’s research—from foundational model advances to applied science collaborations—ensuring accuracy, nuance, and alignment with OpenAI’s long-term goals. The ideal candidate brings strong science or technical fluency, excellent storytelling instincts, and experience navigating complex, high-stakes narratives.</p>
<p>You will partner closely with research leadership, individual researchers, policy, product, and cross-functional communications teams. This role requires both strategic judgment and hands-on execution in a fast-moving environment where research, product, and public discourse intersect.</p>
<p>This role is based in San Francisco, CA and follows a hybrid schedule (three days per week in office). Relocation assistance is available.</p>
<p><strong>In this role you will:</strong></p>
<p><strong>Shape Research Narratives</strong></p>
<ul>
<li>Develop clear, credible external narratives around OpenAI’s research roadmap, breakthroughs, and long-term scientific direction.</li>
<li>Translate complex technical work into accessible stories without oversimplifying or overstating impact.</li>
<li>Help define and reinforce OpenAI’s POV on key research topics (e.g., reasoning, alignment, interpretability, scientific discovery).</li>
</ul>
<p><strong>Lead Research-Focused Media Engagement</strong></p>
<ul>
<li>Build and maintain trusted relationships with top-tier science, technology, and business journalists.</li>
<li>Manage proactive and reactive media engagement related to research announcements, papers, collaborations, and emerging narratives.</li>
<li>Prepare researchers and executives for interviews, briefings, and public appearances.</li>
</ul>
<p><strong>Support Research Launches &amp; Publications</strong></p>
<ul>
<li>Partner with research teams to plan communications for major papers, model releases, evaluations, and science initiatives.</li>
<li>Collaborate with design, editorial, and social teams on blogs, explainers, visuals, and supporting materials.</li>
<li>Ensure launches are grounded in evidence, aligned with policy and safety considerations, and appropriately scoped.</li>
</ul>
<p><strong>Cross-Functional Partnership</strong></p>
<ul>
<li>Work closely with Product, Policy, Safety, Legal, and Marketing to align research communications with broader company goals.</li>
<li>Serve as a communications thought partner to researchers—helping them anticipate questions, risks, and opportunities.</li>
</ul>
<p><strong>Risk Anticipation &amp; Mitigation</strong></p>
<ul>
<li>Identify potential reputational, scientific, or misinterpretation risks early.</li>
<li>Develop mitigation strategies, Q&amp;A, and guidance for sensitive or high-profile research topics.</li>
<li>Help ensure OpenAI communicates responsibly about frontier capabilities and limitations.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>7+ years of experience in research, science, or technology communications</li>
<li>Demonstrated experience working directly with scientists, engineers, or technical leaders</li>
<li>Strong understanding of AI, machine learning, or adjacent scientific domains</li>
<li>Exceptional writing and editing skills, with the ability to adapt tone for expert and general audiences</li>
<li>Proven judgment handling complex, ambiguous, or high-stakes narratives</li>
<li>Experience managing multiple workstreams in a fast-paced environment with shifting priorities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185K – $205K • Offers Equity</Salaryrange>
      <Skills>research communications, science and technical fluency, storytelling, strategic judgment, hands-on execution, media engagement, media relations, spokesperson and interview preparation, research launch and publication communications, cross-functional partnership, reputational risk anticipation and mitigation, Q&amp;A and messaging guidance, writing and editing, AI and machine learning fluency, managing multiple workstreams in fast-paced environments</Skills>
      <Category>Communications</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence in a way that benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>185000</Compensationmin>
      <Compensationmax>205000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/73c6e8aa-968a-4f7d-b87f-718aa0b3e82b</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>