<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>26b9d76f-c85</externalid>
      <Title>Research Engineer, Universes</Title>
      <Description><![CDATA[<p>We&#39;re looking for Research Engineers to help us build the next generation of training environments for capable and safe agentic AI.</p>
<p>This role blends research and engineering responsibilities, requiring you to both implement novel approaches and contribute to research direction. You&#39;ll work on fundamental research in reinforcement learning, designing training environments and methodologies that push the state of the art, and building evaluations that measure genuine capability.</p>
<p>Responsibilities:</p>
<ul>
<li>Build the next generation of agentic environments</li>
<li>Build rigorous evaluations that measure real capability</li>
<li>Collaborate across research and infrastructure teams to ship environments into production training</li>
<li>Debug and iterate rapidly across research and production ML stacks</li>
<li>Contribute to research culture through technical discussions and collaborative problem-solving</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Are highly impact-driven: you care about outcomes, not activity</li>
<li>Operate with high agency</li>
<li>Have strong research taste or senior technical experience, demonstrating sound judgment in identifying what actually matters in complex problem spaces</li>
<li>Can balance research exploration with engineering implementation</li>
<li>Are passionate about the potential impact of AI and are committed to developing safe and beneficial systems</li>
<li>Are comfortable with uncertainty and adapt quickly as the landscape shifts</li>
<li>Have strong software engineering skills and can build robust infrastructure</li>
<li>Enjoy pair programming (we love to pair!)</li>
</ul>
<p>Strong candidates may also have one or more of the following:</p>
<ul>
<li>Industry experience with large language model training, fine-tuning, or evaluation</li>
<li>Industry experience building RL environments, simulation systems, or large-scale ML infrastructure</li>
<li>Senior experience in a relevant technical field, even if transitioning domains</li>
<li>Deep expertise in sandboxing, containerization, VM infrastructure, or distributed systems</li>
<li>Influential published work in relevant ML areas</li>
</ul>
<p>The annual compensation range for this role is $500,000-$850,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$500,000-$850,000 USD</Salaryrange>
      <Skills>Reinforcement learning, Training environments, ML stacks, Software engineering, Pair programming, Large language model training, RL environments, Simulation systems, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>500000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5061517008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da726093-b19</externalid>
      <Title>Research Engineer, Discovery</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Research Engineer on our team, you will work end to end across the whole model stack, identifying and addressing key infrastructure blockers on the path to scientific AGI. Strong candidates should be familiar with elements of language model training, evaluation, and inference, and eager to dive in quickly and get up to speed in areas where they are not yet experts. This may include performance optimization, distributed systems, VM/sandboxing/container deployment, and large-scale data pipelines.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>History working with large scale reinforcement learning</li>
<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale projects, and we&#39;re committed to making a positive impact on the world.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $850,000 USD</Salaryrange>
      <Skills>infrastructure engineering, large-scale distributed systems, performance optimization, containerization technologies, orchestration at scale, large-scale data pipelines, distributed storage systems, ML stack, workflow orchestration tools, experiment management systems, large-scale reinforcement learning, language model training infrastructure, distributed ML frameworks, GPU/TPU architectures, language model inference optimization, cloud platforms, VM and container orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>32b83135-974</externalid>
      <Title>Software Engineer, Data Infrastructure - Research</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Data Infrastructure - Research</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$250K – $380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Workload team is responsible for designing and running OpenAI’s LLM training and inference infrastructure that powers frontier models at massive scale. Our systems unify how researchers train and serve models, abstracting away the complexity of performance, parallelism, and execution across vast GPU/accelerator fleets. By providing this foundation, the Workload team ensures that researchers can focus on advancing model capabilities while we handle the scale, efficiency, and reliability required to bring those models to life.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for an engineer to design and implement the dataset infrastructure that powers OpenAI’s next-generation training stack. You will be responsible for building standardized dataset interfaces, scaling pipelines across thousands of GPUs, and proactively testing performance bottlenecks. In this role, you will collaborate closely with the multimodal researchers, and other infra groups to ensure datasets are unified, efficient, and easy to consume.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and maintain standardized dataset APIs, including for multimodal (MM) data that cannot fit in memory</li>
<li>Build proactive testing and scale-validation pipelines for dataset loading at GPU scale</li>
<li>Collaborate with teammates to integrate datasets seamlessly into training and inference pipelines, ensuring smooth adoption and a great user experience</li>
<li>Document and maintain dataset interfaces so they are discoverable, consistent, and easy for other teams to adopt</li>
<li>Establish safeguards and validation systems to ensure datasets remain reproducible and unchanged once standardized</li>
<li>Debug and resolve performance bottlenecks in distributed dataset loading (e.g., straggler nodes slowing global training)</li>
<li>Provide visualization and inspection tools to surface errors, bugs, or bottlenecks in datasets</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong engineering fundamentals with experience in distributed systems, data pipelines, or infrastructure</li>
<li>Have experience building APIs, modular code, and scalable abstractions, while recognizing that abstractions ultimately serve their users and that UX is an important part of abstraction design</li>
<li>Are comfortable debugging bottlenecks across large fleets of machines</li>
<li>Take pride in building infrastructure that “just works,” and find joy in being the guardian of reliability and scale</li>
<li>Are collaborative, humble, and excited to own a foundational (if not glamorous) part of the ML stack</li>
</ul>
<p><strong>Bonus points if you:</strong></p>
<ul>
<li>Have background knowledge in data math, probability, or distributed data theory</li>
<li>Have worked with GPU-scale distributed systems or dataset scaling for real-time data</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$250K – $380K • Offers Equity</Salaryrange>
      <Skills>distributed systems, data pipelines, infrastructure, API design, modular code, scalable abstractions, data math, probability, distributed data theory, GPU-scale distributed systems, dataset scaling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company was founded in 2015 and has since grown to become a leading player in the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>250000</Compensationmin>
      <Compensationmax>380000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/b7a2e30f-c5f6-4710-b53e-64d64bcce189</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>