<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>762f29e0-3f9</externalid>
      <Title>Anthropic Fellows Program</Title>
      <Description><![CDATA[<p>The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent, regardless of previous experience.</p>
<p>Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission).</p>
<p>We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.</p>
<p><strong>What to Expect</strong></p>
<ul>
<li>4 months of full-time research</li>
<li>Direct mentorship from Anthropic researchers</li>
<li>Access to a shared workspace (in either Berkeley, California or London, UK)</li>
<li>Connection to the broader AI safety and security research community</li>
<li>Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD + benefits (these vary by country)</li>
<li>Funding for compute (~$15k/month) and other research expenses</li>
</ul>
<p><strong>Interview Process</strong></p>
<p>The interview process will include an initial application &amp; reference check, technical assessments &amp; interviews, and a research discussion.</p>
<p><strong>Compensation</strong></p>
<p>The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).</p>
<p><strong>Fellows Workstreams</strong></p>
<p>Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect significant overlap in the skills and responsibilities across these roles, and by default we will consider candidates for all workstreams.</p>
<p>Some workstreams may include unique assessment steps; we therefore ask for your workstream preferences in the application. You can see an overview of the current workstreams below:</p>
<ul>
<li>AI Safety Fellows</li>
<li>AI Security Fellows</li>
<li>ML Systems &amp; Performance Fellows</li>
<li>Reinforcement Learning Fellows</li>
<li>Economics &amp; Societal Impacts Fellows</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python programming, Computer science, Mathematics, Physics, Empirical AI research, Economics, Social sciences, Cybersecurity, Open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5023394008</Applyto>
      <Location>London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>