<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>dc6154f8-cff</externalid>
      <Title>Research Engineer, Pretraining Scaling - London</Title>
<Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer on Anthropic&#39;s ML Performance and Scaling team, you&#39;ll ensure our frontier models train reliably, efficiently, and at scale. This is demanding, high-impact work that requires both deep technical expertise and a genuine passion for the craft of large-scale ML systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own critical aspects of our production pretraining pipeline, including model operations, performance optimization, observability, and reliability</li>
<li>Debug and resolve complex issues across the full stack, from hardware errors and networking to training dynamics and evaluation infrastructure</li>
<li>Design and run experiments to improve training efficiency, reduce step time, increase uptime, and enhance model performance</li>
<li>Respond to on-call incidents during model launches, diagnosing problems quickly and coordinating solutions across teams</li>
<li>Build and maintain production logging, monitoring dashboards, and evaluation infrastructure</li>
<li>Add new capabilities to the training codebase, such as long context support or novel architectures</li>
<li>Collaborate closely with teammates across SF and London, as well as with the Tokens, Architectures, and Systems teams</li>
<li>Contribute to the team&#39;s institutional knowledge by documenting systems, debugging approaches, and lessons learned</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have hands-on experience training large language models, or deep expertise with JAX, TPU, PyTorch, or large-scale distributed systems</li>
<li>Genuinely enjoy both research and engineering work; you&#39;d describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other</li>
<li>Are excited about being on-call for production systems, working long days during launches, and solving hard problems under pressure</li>
<li>Thrive when working on whatever is most impactful, even if that changes day-to-day based on what the production model needs</li>
<li>Excel at debugging complex, ambiguous problems across multiple layers of the stack</li>
<li>Communicate clearly and collaborate effectively, especially when coordinating across time zones or during high-stress incidents</li>
<li>Are passionate about the work itself and want to refine your craft as a research engineer</li>
<li>Care about the societal impacts of AI and responsible scaling</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Previous experience training LLMs or working extensively with JAX/TPU, PyTorch, or other ML frameworks at scale</li>
<li>Contributed to open-source LLM frameworks (e.g., open_lm, llm-foundry, mesh-transformer-jax)</li>
<li>Published research on model training, scaling laws, or ML systems</li>
<li>Experience with production ML systems, observability tools, or evaluation infrastructure</li>
<li>A background as a systems engineer, quant, or in other roles requiring both technical depth and operational excellence</li>
</ul>
<p><strong>What Makes This Role Unique</strong></p>
<p>This is not a typical research engineering role. The work is highly operational; you&#39;ll be deeply involved in keeping our production models training smoothly, which means being responsive to incidents, flexible about priorities, and comfortable with uncertainty. During launches, the team often works extended hours and may need to respond to issues on evenings and weekends.</p>
<p>However, this operational intensity comes with extraordinary learning opportunities. You&#39;ll gain hands-on experience with some of the largest, most sophisticated training runs in the industry. You&#39;ll work alongside world-class researchers and engineers, and the institutional knowledge you build will compound in ways that can&#39;t be easily transferred. For people who thrive on this type of work, it&#39;s uniquely rewarding.</p>
<p>We&#39;re building a close-knit team of people who genuinely care about doing excellent work together. If you&#39;re someone who wants to be part of training the models that will define the future of AI, and you&#39;re excited about the full reality of what that entails, we&#39;d love to hear from you.</p>
<p><strong>Location</strong></p>
<p>This role requires working in-office 5 days per week in London.</p>
<p><strong>Deadline to apply</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p><strong>Annual Salary</strong></p>
<p>£260,000-£630,000 GBP</p>
<p><strong>Logistics</strong></p>
<p>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the h</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>£260,000-£630,000 GBP</Salaryrange>
      <Skills>JAX, TPU, PyTorch, large-scale distributed systems, model operations, performance optimization, observability, reliability, debugging, complex issues, hardware errors, networking, training dynamics, evaluation infrastructure, experiments, training efficiency, step time, uptime, model performance, production logging, monitoring dashboards, codebase, long context support, novel architectures, collaboration, institutional knowledge, documentation, debugging approaches, lessons learned</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>GBP</Compensationcurrency>
      <Compensationmin>260000</Compensationmin>
      <Compensationmax>630000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4938436008</Applyto>
      <Location>London, UK</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6960fd5f-0e8</externalid>
      <Title>Research Engineer, Pretraining Scaling</Title>
<Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Research Engineer on Anthropic&#39;s ML Performance and Scaling team, you&#39;ll ensure our frontier models train reliably, efficiently, and at scale. This is demanding, high-impact work that requires both deep technical expertise and a genuine passion for the craft of large-scale ML systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own critical aspects of our production pretraining pipeline, including model operations, performance optimization, observability, and reliability</li>
<li>Debug and resolve complex issues across the full stack, from hardware errors and networking to training dynamics and evaluation infrastructure</li>
<li>Design and run experiments to improve training efficiency, reduce step time, increase uptime, and enhance model performance</li>
<li>Respond to on-call incidents during model launches, diagnosing problems quickly and coordinating solutions across teams</li>
<li>Build and maintain production logging, monitoring dashboards, and evaluation infrastructure</li>
<li>Add new capabilities to the training codebase, such as long context support or novel architectures</li>
<li>Collaborate closely with teammates across SF and London, as well as with the Tokens, Architectures, and Systems teams</li>
<li>Contribute to the team&#39;s institutional knowledge by documenting systems, debugging approaches, and lessons learned</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have hands-on experience training large language models, or deep expertise with JAX, TPU, PyTorch, or large-scale distributed systems</li>
<li>Genuinely enjoy both research and engineering work; you&#39;d describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other</li>
<li>Are excited about being on-call for production systems, working long days during launches, and solving hard problems under pressure</li>
<li>Thrive when working on whatever is most impactful, even if that changes day-to-day based on what the production model needs</li>
<li>Excel at debugging complex, ambiguous problems across multiple layers of the stack</li>
<li>Communicate clearly and collaborate effectively, especially when coordinating across time zones or during high-stress incidents</li>
<li>Are passionate about the work itself and want to refine your craft as a research engineer</li>
<li>Care about the societal impacts of AI and responsible scaling</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Previous experience training LLMs or working extensively with JAX/TPU, PyTorch, or other ML frameworks at scale</li>
<li>Contributed to open-source LLM frameworks (e.g., open_lm, llm-foundry, mesh-transformer-jax)</li>
<li>Published research on model training, scaling laws, or ML systems</li>
<li>Experience with production ML systems, observability tools, or evaluation infrastructure</li>
<li>A background as a systems engineer, quant, or in other roles requiring both technical depth and operational excellence</li>
</ul>
<p><strong>What Makes This Role Unique</strong></p>
<p>This is not a typical research engineering role. The work is highly operational; you&#39;ll be deeply involved in keeping our production models training smoothly, which means being responsive to incidents, flexible about priorities, and comfortable with uncertainty. During launches, the team often works extended hours and may need to respond to issues on evenings and weekends.</p>
<p>However, this operational intensity comes with extraordinary learning opportunities. You&#39;ll gain hands-on experience with some of the largest, most sophisticated training runs in the industry. You&#39;ll work alongside world-class researchers and engineers, and the institutional knowledge you build will compound in ways that can&#39;t be easily transferred. For people who thrive on this type of work, it&#39;s uniquely rewarding.</p>
<p>We&#39;re building a close-knit team of people who genuinely care about doing excellent work together. If you&#39;re someone who wants to be part of training the models that will define the future of AI, and you&#39;re excited about the full reality of what that entails, we&#39;d love to hear from you.</p>
<p><strong>Location</strong></p>
<p>This role requires working in-office 5 days per week in San Francisco.</p>
<p><strong>Deadline to apply</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p><strong>Annual Salary</strong></p>
<p>$350,000-$850,000 USD</p>
<p><strong>Logistics</strong></p>
<p>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>JAX, TPU, PyTorch, large-scale distributed systems, model operations, performance optimization, observability, reliability, debugging, complex issues, hardware errors, networking, training dynamics, evaluation infrastructure, experiments, training efficiency, step time, uptime, model performance, production logging, monitoring dashboards, new capabilities, long context support, novel architectures, collaboration, institutional knowledge, documentation, debugging approaches, lessons learned</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that focuses on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>850000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4938432008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac0b2f4-6c9</externalid>
      <Title>Member of Technical Staff - Imagine Product</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>
<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low-latency at global scale.</li>
<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>
<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>
<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>
<li>Own full-cycle development of solutions: from system design and prototyping to deployment, monitoring, observability, and iterative refinement.</li>
<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>
<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and handling high-throughput workloads.</li>
<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>
<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>
<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>
<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>
<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>
<li>Background in AI-driven consumer products or media generation technologies.</li>
<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://xAI.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>440000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052027007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d5ea8d9-d71</externalid>
      <Title>Technical Program Manager, Quality Engineering</Title>
      <Description><![CDATA[<p>As Technical Program Manager for Quality Engineering, you will be responsible for ensuring the quality of Hivemind Enterprise, a software development kit that enables both Shield AI and third parties to create a new generation of unmanned systems and mission applications.</p>
<p>You will work with sponsors, product managers, designers, and a talented cross-functional engineering team to lead on-time, on-quality software releases. This Technical Program Manager role requires a versatile individual who can plan and manage complex software test automation projects and flight test operations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Drive the planning, execution, and delivery of complex software releases for the Hivemind Enterprise platform.</li>
<li>Drive major initiatives building out rigorous simulation-based test automation for autonomy software.</li>
<li>Drive large, complex test operations from CI through HIL and flight test.</li>
<li>Collaborate with cross-functional teams, including engineering, product management, and customer engagement, to define test scope, objectives, and deliverables.</li>
<li>Develop detailed project plans, including timelines, milestones, and resource allocation, ensuring adherence to budget and schedule.</li>
<li>Monitor project progress, identify risks, and implement mitigation strategies to ensure successful project completion.</li>
<li>Facilitate regular communication with stakeholders, providing updates on project status, challenges, and accomplishments.</li>
<li>Drive continuous improvement in project management processes, tools, and methodologies.</li>
<li>Foster a culture of innovation, collaboration, and excellence within the program management team.</li>
<li>Facilitate technical decision-making to appropriately prioritize work and make complex trades.</li>
<li>Reinforce Shield AI&#39;s reputation for technical excellence and ability to deliver through expert engagement and disciplined execution.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$170,000 - $260,000 USD</Salaryrange>
      <Skills>B.S. in Computer Science, or a related field, 8+ years of work experience in Program Management, Engineering Management or Software Test Engineering, Experience driving testing of complex software and integrated HW/SW systems, Experience driving software test operations for software high-reliability and high uptime, Power user of Jira or work management tools, Power user of TestRail, qTest or other test management frameworks, Advanced degree in Computer Science or MBA, Previous experience as a software developer or a test engineer, Previous experience driving test operations for enterprise software, ideally for software used by software engineers, Previous experience managing flight test operations or autonomous systems testing, Passionate about software documentation, Autonomy, AI, ML expertise, PMP, Scrum Master, or similar qualifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for protecting service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>170000</Compensationmin>
      <Compensationmax>260000</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/43aba4f5-3e19-4e97-bc57-d19849ccd8ab</Applyto>
      <Location>San Diego, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b447a8bc-5f1</externalid>
      <Title>Backend Software Engineer - B2B Connectors</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco; New York City</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s mission is to make AGI beneficial for all of humanity, and that mission succeeds only if AGI drives real benefits across all industries in the world. Our goal in B2B applications is to enable this mission by helping businesses, enterprises &amp; governments redefine how they operate to empower people and accelerate economic growth.</p>
<p>Connectors are the bridge between OpenAI products (ChatGPT Enterprise, Frontier, and the API) and the systems where work actually happens—documents, tickets, messages, CRM records, knowledge bases, and more. The Connectors Platform team builds the infrastructure and control plane that makes these integrations reliable, secure, scalable, and enterprise-ready across a wide range of partners and customer environments.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for an infrastructure-focused engineer to build and operate the systems that make Connectors dependable at global scale. In this role, you’ll design the control plane, reliability foundations, and operational tooling that power connector execution—auth flows, sync and indexing pipelines, rate limiting, isolation, observability, incident response, and safe rollouts. You’ll work closely with product engineering, partner teams, and security to ship enterprise-grade connectivity while meeting high bars for privacy, compliance, and uptime.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and operate the infrastructure that powers connector sync, indexing, and retrieval at scale (job orchestration, queues, storage, caching, backpressure).</li>
<li>Build the “control plane” primitives for connectors: rollout controls, configuration management, permissions, policy enforcement, and kill switches.</li>
<li>Own reliability and operational excellence: SLOs, monitoring/alerting, incident response, postmortems, on-call health, and capacity planning.</li>
<li>Create guardrails for safe multi-tenant execution: isolation boundaries, secrets handling, rate limits, abuse prevention, and blast-radius reduction.</li>
<li>Partner with security and compliance teams to ensure enterprise requirements are met (auditability, least privilege, data retention, and secure-by-default architecture).</li>
<li>Improve developer velocity via internal tooling: local dev workflows, canary environments, load testing, and observability dashboards.</li>
</ul>
<p><strong>Your background might look something like:</strong></p>
<ul>
<li>5+ years of professional engineering experience (excluding internships) in infra / SRE / platform roles at tech and product-driven companies.</li>
<li>Strong distributed systems fundamentals and production instincts (availability, latency, correctness, resilience).</li>
<li>Experience building and operating services with meaningful uptime and scale requirements (multi-region is a plus).</li>
<li>Proficient in one or more backend languages (e.g. Python, Rust) and comfortable working close to systems concerns (networking, storage, queueing).</li>
<li>Deep familiarity with observability (metrics, logs, tracing), incident management, and reliability engineering practices.</li>
<li>Comfortable navigating ambiguous problem spaces and pushing pragmatic solutions into production.</li>
</ul>
<p>Interest in AI/ML is a plus, but not required.</p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>Backend languages (e.g. Python, Rust), Distributed systems fundamentals, Production instincts (availability, latency, correctness, resilience), Experience building and operating services with meaningful uptime and scale requirements, Proficient in one or more backend languages and comfortable working close to systems concerns (networking, storage, queueing), Deep familiarity with observability (metrics, logs, tracing), incident management, and reliability engineering practices, AI/ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company is focused on developing and deploying AI systems that are safe and beneficial to society.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/cbacb6bd-aa41-41af-a5d5-13515a1be72b</Applyto>
      <Location>San Francisco; New York City</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>