<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>ac45e205-e7d</externalid>
      <Title>Engineering Manager, Inference Routing and Performance</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Every request that hits Claude, whether from claude.ai, the API, our cloud partners, or internal research, passes through a routing decision. Not a generic load-balancer round-robin, but a decision that accounts for what&#39;s already cached where, which accelerator the request runs best on, and what else is in flight across the fleet.</p>
<p>Get it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn&#39;t have been shed.</p>
<p>The Inference Routing team owns this layer. We build the cluster-level routing and coordination plane for Anthropic&#39;s inference fleet: the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time.</p>
<p>As Anthropic moves from &quot;many independent inference replicas&quot; toward &quot;a single warehouse-scale computer running a coordinated program,&quot; Dystro is the coordination layer. This is a deeply technical team. The engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators. They work shoulder-to-shoulder with teams that write kernels and ML framework internals.</p>
<p>The EM for this team doesn&#39;t need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.</p>
<p>You&#39;ll inherit a strong team of distributed-systems engineers, and you&#39;ll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence. The job is holding both.</p>
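<p><em>For illustration only: a minimal Python sketch of the kind of cache-, hardware-, and load-aware routing score described above. The names, fields, and weights are hypothetical, not Anthropic&#39;s actual system.</em></p>
<pre><code># Hypothetical sketch: score replicas for one request, trading off
# cache affinity, hardware fit, and in-flight load. Illustrative only.
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    accelerator: str           # e.g. "gpu", "tpu", "trainium"
    cached_prefix_tokens: int  # tokens of the request prefix already in KV cache
    inflight: int              # requests currently in flight
    capacity: int              # max concurrent requests

def route(prefix_tokens: int, preferred_accel: str,
          replicas: list[Replica]) -> Replica:
    def score(r: Replica) -> float:
        cache_hit = min(r.cached_prefix_tokens, prefix_tokens) / max(prefix_tokens, 1)
        hw_fit = 1.0 if r.accelerator == preferred_accel else 0.5
        load = r.inflight / r.capacity
        # Weights are invented; a real router would derive them from a
        # quantitative model of fleet throughput and latency.
        return 3.0 * cache_hit + 1.0 * hw_fit - 2.0 * load
    candidates = [r for r in replicas if r.inflight &lt; r.capacity]
    if not candidates:
        raise RuntimeError("no capacity: shed or queue the request")
    return max(candidates, key=score)

replicas = [
    Replica("a", "gpu", cached_prefix_tokens=4096, inflight=7, capacity=8),
    Replica("b", "gpu", cached_prefix_tokens=0,    inflight=2, capacity=8),
    Replica("c", "tpu", cached_prefix_tokens=4096, inflight=3, capacity=8),
]
print(route(4096, "gpu", replicas).name)  # "c": cache hit plus headroom beats raw hardware fit
</code></pre>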
<p><strong>Representative work:</strong></p>
<p>Things the Inference Routing EM actually spends time on:</p>
<ul>
<li>Deciding whether a proposed routing-algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses</li>
<li>Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers</li>
<li>Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to per-replica behavior to a root cause in the networking stack</li>
<li>Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win</li>
<li>Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick</li>
<li>Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they&#39;d be additive to a team that already goes deep</li>
</ul>
<p><strong>What you&#39;ll do:</strong></p>
<p>Drive system-level performance:</p>
<ul>
<li>Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync</li>
<li>Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results</li>
<li>Build the team&#39;s habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is</li>
</ul>
<p>Deliver reliably and operate cleanly:</p>
<ul>
<li>Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces</li>
<li>Run the team&#39;s operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile</li>
<li>Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. You&#39;ll make sure commitments are realistic, dependencies are understood, and nobody is surprised</li>
</ul>
<p>Build and grow the team:</p>
<ul>
<li>Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability</li>
<li>Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you&#39;ll help make that collaboration pattern productive</li>
<li>Pick up slack when it matters. This is a small team on a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale</li>
<li>Have a deep systems background: load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level</li>
<li>Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was</li>
<li>Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline</li>
<li>Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions</li>
<li>Build strong relationships across team boundaries; this is a seam role, and much of the job is making sure other teams can rely on yours</li>
<li>Are curious about machine learning systems. You don&#39;t need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience with LLM inference serving: KV caching, continuous batching, request scheduling, prefill/decode disaggregation</li>
<li>Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale</li>
<li>Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement</li>
<li>Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging (enough to follow and evaluate the technical work, not necessarily to do it daily)</li>
<li>Led teams at supercomputing or hyperscaler infrastructure scale</li>
<li>Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery</li>
</ul>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>engineering management, distributed systems, load balancing, scheduling, cache-coherent distributed state, high-performance networking, machine learning systems, LLM inference serving, cluster schedulers, load balancers, service meshes, coordination planes, heterogeneous accelerator fleets, GPU/TPU/Trainium, GPU/accelerator programming, ML framework internals, OS-level performance debugging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155391008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e9e3cff7-d9b</externalid>
      <Title>Performance Engineer</Title>
      <Description><![CDATA[<p>As a Performance Engineer at Anthropic, you will be responsible for identifying and solving novel systems problems that arise when running machine learning algorithms at scale. Your expertise will be crucial in developing systems that optimize the throughput and robustness of our largest distributed systems.</p>
<p>You will work closely with our team of researchers, engineers, and policy experts to build beneficial AI systems. Your contributions will have a direct impact on the development of our AI technology and its applications.</p>
<p>We are looking for a highly motivated and experienced engineer who is passionate about solving complex systems problems and has a strong background in software engineering or machine learning. If you are excited about the opportunity to work on cutting-edge AI technology and make a meaningful contribution to the field, we encourage you to apply.</p>
<p>Responsibilities:</p>
<ul>
<li>Identify and solve novel systems problems that arise when running machine learning algorithms at scale</li>
<li>Develop systems that optimize the throughput and robustness of our largest distributed systems</li>
<li>Collaborate with our team of researchers, engineers, and policy experts to build beneficial AI systems</li>
<li>Contribute to the development of our AI technology and its applications</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering or machine learning experience, particularly at supercomputing scale</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Ability to pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming</li>
<li>Want to learn more about machine learning research</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience with high-performance, large-scale ML systems</li>
<li>GPU/Accelerator programming</li>
<li>ML framework internals</li>
<li>OS internals</li>
<li>Language modeling with transformers</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$280,000-$850,000 USD</Salaryrange>
      <Skills>software engineering, machine learning, high-performance computing, GPU/Accelerator programming, ML framework internals, OS internals, language modeling with transformers, pair programming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>280000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4020350008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1819a743-ca5</externalid>
      <Title>Engineering Manager, GPU (ML Accelerator)</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>As an Engineering Manager on Anthropic&#39;s performance and scaling teams, you will be responsible for ensuring your team identifies and removes bottlenecks, builds robust and durable solutions, and maximizes the efficiency of our systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide front-line leadership of engineering efforts to improve model performance and scale our inference and training systems</li>
<li>Become familiar enough with the team&#39;s technical stack to make targeted contributions as an individual contributor</li>
<li>Manage day-to-day execution of the team&#39;s work</li>
<li>Prioritize the team&#39;s work and manage projects in a highly dynamic, fast-paced environment</li>
<li>Coach and support your reports in understanding, and pursuing, their professional growth</li>
<li>Maintain a deep understanding of the team&#39;s technical work and its implications for AI safety</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 1+ years of management experience in a technical environment, particularly performance or distributed systems</li>
<li>Have a background in machine learning, AI, or a similar related technical field</li>
<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>
<li>Excel at building strong relationships with stakeholders at all levels</li>
<li>Are a quick learner, capable of understanding and contributing to discussions on complex technical topics</li>
<li>Have experience managing teams through periods of rapid growth and change</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale ML systems</li>
<li>GPU/Accelerator programming</li>
<li>ML framework internals</li>
<li>OS internals</li>
<li>Language modeling with transformers</li>
</ul>
<p>The annual compensation range for this role is $500,000-$850,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$500,000-$850,000 USD</Salaryrange>
      <Skills>Machine Learning, AI, Performance or Distributed Systems, GPU/Accelerator Programming, ML Framework Internals, OS Internals, Language Modeling with Transformers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>500000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4741104008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b50d0ec9-1d8</externalid>
      <Title>Engineering Manager, ML Acceleration</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the role:</strong></p>
<p>Anthropic&#39;s performance and scaling teams focus on making the most efficient and impactful use of our compute resources, be it inference or training. As an Engineering Manager on these teams, you will be responsible for ensuring you and your team are identifying and removing bottlenecks, building robust and durable solutions, and maximizing the efficiency of our systems. You will also help bring clarity, focus, and context to your teams in a fast-paced, dynamic environment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Provide front-line leadership of engineering efforts to improve model performance and scale our inference and training systems</li>
<li>Become familiar enough with the team&#39;s technical stack to make targeted contributions as an individual contributor</li>
<li>Manage day-to-day execution of the team&#39;s work</li>
<li>Prioritize the team&#39;s work and manage projects in a highly dynamic, fast-paced environment</li>
<li>Coach and support your reports in understanding, and pursuing, their professional growth</li>
<li>Maintain a deep understanding of the team&#39;s technical work and its implications for AI safety</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 1+ years of management experience in a technical environment, particularly performance or distributed systems</li>
<li>Have a background in machine learning, AI, or a similar related technical field</li>
<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>
<li>Excel at building strong relationships with stakeholders at all levels</li>
<li>Are a quick learner, capable of understanding and contributing to discussions on complex technical topics</li>
<li>Have experience managing teams through periods of rapid growth and change</li>
<li>Are a quick study: this team sits at the intersection of a large number of different complex technical systems that you’ll need to understand (at a high level of abstraction) to be effective</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>High performance, large-scale ML systems</li>
<li>GPU/Accelerator programming</li>
<li>ML framework internals</li>
<li>OS internals</li>
<li>Language modeling with transformers</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong></p>
<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and paid time off, and a comprehensive benefits package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$500,000 - $850,000 USD</Salaryrange>
      <Skills>Machine Learning, AI, Distributed Systems, High Performance Computing, GPU/Accelerator Programming, ML Framework Internals, OS Internals, Language Modeling with Transformers, High-Performance Large-Scale ML Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>500000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4741104008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>20d39f2a-da8</externalid>
      <Title>TPU Kernel Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a TPU Kernel Engineer, you&#39;ll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimising kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance.</p>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have significant experience optimising ML systems for TPUs, GPUs, or other accelerators</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Want to learn more about machine learning research</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>High performance, large-scale ML systems</li>
<li>Designing and implementing kernels for TPUs or other ML accelerators</li>
<li>Understanding accelerators at a deep level, e.g. a background in computer architecture</li>
<li>ML framework internals</li>
<li>Language modeling with transformers</li>
</ul>
<p><strong>Representative projects:</strong></p>
<ul>
<li>Implement low-latency, high-throughput sampling for large language models</li>
<li>Adapt existing models for low-precision inference</li>
<li>Build quantitative models of system performance (a sketch follows this list)</li>
<li>Design and implement custom collective communication algorithms</li>
<li>Debug kernel performance at the assembly level</li>
</ul>
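<p><em>For illustration, a minimal Python sketch of the kind of quantitative performance model mentioned above: a roofline-style estimate of whether a kernel is compute-bound or memory-bound. The hardware numbers are assumptions, not any specific accelerator.</em></p>
<pre><code># Hypothetical roofline-style estimate: lower-bound a kernel runtime by
# the max of its compute time and its memory time. Numbers are made up.

def roofline_time_s(flops: float, bytes_moved: float,
                    peak_flops: float, peak_bw: float):
    t_compute = flops / peak_flops    # seconds if perfectly compute-limited
    t_memory = bytes_moved / peak_bw  # seconds if perfectly bandwidth-limited
    bound = "compute-bound" if t_compute &gt; t_memory else "memory-bound"
    return max(t_compute, t_memory), bound

# Example: a bf16 matmul C[m,n] = A[m,k] @ B[k,n] with ideal operand reuse
m, k, n = 8192, 8192, 8192
flops = 2.0 * m * k * n                # one multiply-add per (m,k,n) triple
bytes_moved = 2.0 * (m*k + k*n + m*n)  # bf16 is 2 bytes per element
t, bound = roofline_time_s(flops, bytes_moved,
                           peak_flops=275e12,  # assumed peak FLOP/s
                           peak_bw=1.2e12)     # assumed HBM bytes/s
print(f"{t*1e3:.2f} ms, {bound}")      # ~4 ms, compute-bound at this size
</code></pre>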
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p><strong>Guidance on Candidates&#39; AI Usage:</strong></p>
<p>Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$280,000 - $850,000 USD</Salaryrange>
      <Skills>TPU, GPU, ML systems, kernel design, optimisation, pair programming, machine learning research, societal impacts, high performance, large-scale ML systems, computer architecture, ML framework internals, language modeling with transformers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>280000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4720576008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>797f344d-f9f</externalid>
      <Title>Performance Engineer</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>Running machine learning (ML) algorithms at our scale often requires solving novel systems problems. As a Performance Engineer, you&#39;ll be responsible for identifying these problems, and then developing systems that optimize the throughput and robustness of our largest distributed systems. Strong candidates here will have a track record of solving large-scale systems problems and will be excited to grow to become an expert in ML also.</p>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have significant software engineering or machine learning experience, particularly at supercomputing scale</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Want to learn more about machine learning research</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>High performance, large-scale ML systems</li>
<li>GPU/Accelerator programming</li>
<li>ML framework internals</li>
<li>OS internals</li>
<li>Language modeling with transformers</li>
</ul>
<p><strong>Representative projects:</strong></p>
<ul>
<li>Implement low-latency high-throughput sampling for large language models</li>
<li>Implement GPU kernels to adapt our models to low-precision inference</li>
<li>Write a custom load-balancing algorithm to optimize serving efficiency (see the sketch after this list)</li>
<li>Build quantitative models of system performance</li>
<li>Design and implement a fault-tolerant distributed system running with a complex network topology</li>
<li>Debug kernel-level network latency spikes in a containerized environment</li>
</ul>
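<p><em>For illustration, a minimal Python sketch of one classic load-balancing heuristic, the &quot;power of two choices&quot;: sample two backends at random and send the request to the less loaded one. Names are hypothetical; this is not Anthropic&#39;s serving stack.</em></p>
<pre><code># Hypothetical sketch of "power of two choices" load balancing:
# picking the less loaded of two random backends keeps the maximum
# load far closer to the mean than uniform random assignment does.
import random
from collections import Counter

def assign(loads: Counter, backends: list) -> str:
    a, b = random.sample(backends, 2)
    choice = a if loads[a] &lt;= loads[b] else b
    loads[choice] += 1
    return choice

backends = [f"replica-{i}" for i in range(16)]
loads = Counter()
for _ in range(10_000):
    assign(loads, backends)
print("max:", max(loads.values()), "min:", min(loads.values()))  # tight spread around 625
</code></pre>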
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p><strong>Guidance on Candidates&#39; AI Usage:</strong></p>
<p>Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$280,000 - $850,000 USD</Salaryrange>
      <Skills>software engineering, machine learning, GPU/Accelerator programming, ML framework internals, OS internals, language modeling with transformers, high performance, large-scale ML systems, fault-tolerant distributed systems, complex network topology, quantitative models of system performance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. The company is headquartered in San Francisco and has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>280000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4020350008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>5897facf-31b</externalid>
      <Title>Engineering Manager, Inference</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>Anthropic&#39;s performance and scaling teams focus on making the most efficient and impactful use of our compute resources, be it inference or training. As an Engineering Manager on these teams, you will be responsible for ensuring you and your team are identifying and removing bottlenecks, building robust and durable solutions, and maximizing the efficiency of our systems. You will also help bring clarity, focus, and context to your teams in a fast-paced, dynamic environment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Provide front-line leadership of engineering efforts to improve model performance and scale our inference and training systems</li>
<li>Become familiar enough with the team&#39;s technical stack to make targeted contributions as an individual contributor</li>
<li>Manage day-to-day execution of the team&#39;s work</li>
<li>Prioritize the team&#39;s work and manage projects in a highly dynamic, fast-paced environment</li>
<li>Coach and support your reports in understanding, and pursuing, their professional growth</li>
<li>Maintain a deep understanding of the team&#39;s technical work and its implications for AI safety</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 1+ years of management experience in a technical environment, particularly performance or distributed systems</li>
<li>Have a background in machine learning, AI, or a similar related technical field</li>
<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>
<li>Excel at building strong relationships with stakeholders at all levels</li>
<li>Are a quick learner, capable of understanding and contributing to discussions on complex technical topics</li>
<li>Have experience managing teams through periods of rapid growth and change</li>
<li>Are a quick study: this team sits at the intersection of a large number of different complex technical systems that you&#39;ll need to understand (at a high level of abstraction) to be effective</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>High performance, large-scale ML systems</li>
<li>GPU/Accelerator programming</li>
<li>ML framework internals</li>
<li>OS internals</li>
<li>Language modeling with transformers</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave policies, and a dynamic and inclusive work environment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$425,000 - $560,000 USD</Salaryrange>
      <Skills>machine learning, AI, performance systems, distributed systems, high-performance large-scale ML systems, GPU/Accelerator programming, ML framework internals, OS internals, language modeling with transformers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. The company has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>425000</Compensationmin>
      <Compensationmax>560000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4741102008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>