<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1bebb6dc-380</externalid>
      <Title>Staff Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>
<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>
<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>
<p>Impact and Responsibilities:</p>
<ul>
<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical execution.</li>
<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>
<li>Technical Ownership: You will proactively identify and drive opportunities for organizational growth, improvements in programming practices, and upgrades to the tools that define our development lifecycle.</li>
<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing guidance to elevate the engineering culture across the company.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>8+ years of full-time, post-graduation engineering experience, with a specialty in back-end systems.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</li>
<li>Fluency with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>
<li>Experience with orchestration platforms, such as Temporal and AWS Step Functions.</li>
<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>
<li>Experience scaling products at hyper-growth startups.</li>
<li>Excitement to work with AI technologies.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $252,000-$315,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Software development, Distributed systems, Public cloud platforms, Containerization &amp; deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>252000</Compensationmin>
      <Compensationmax>315000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649893005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1869fa15-51d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>
<p>You will drive the design and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>
<p>Ideally, you&#39;d have 3+ years of full-time, post-graduation engineering experience, with a specialty in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred), along with a track record of independent ownership of successful engineering projects. You should also possess excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</p>
<p>You should be fluent with standard containerization &amp; deployment technologies like Kubernetes, Terraform, and Docker, and have experience with orchestration platforms such as Temporal and AWS Step Functions. You should also have experience with NoSQL document databases (MongoDB) and structured databases (Postgres), and strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>
<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, containerization &amp; deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>225000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4594879005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), 5+ years of post-graduation engineering experience with a focus on back-end systems, and proficiency in at least one of Python, TypeScript, JavaScript, or C++.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, TypeScript, JavaScript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country>Qatar</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1fa6d45d-1b7</externalid>
      <Title>Senior Software Engineer, United Kingdom</Title>
      <Description><![CDATA[<p>We are hiring Software Engineers to accelerate our mission. At KoBold, software engineers have the unique opportunity to embed directly with their users and learn the ins and outs of mineral exploration and geology while developing state-of-the-art technology solutions.</p>
<p>Unlike traditional software engineering roles, we don&#39;t simply ship code and passively wait for feedback about its utility: our userbase includes our colleagues... and ourselves!</p>
<p>While there are real technical challenges in making mineral exploration data broadly searchable and accessible to both humans and machines, we believe that solving these technical challenges cannot be done without &quot;getting our hands dirty&quot; – sometimes literally! – by embedding directly with the exploration teams and even occasionally (~once a year) joining our colleagues in the field, be it in Zambia, Canada, or Arizona, to experience the impact of our software in real time.</p>
<p>As a Software Engineer on the Data Systems Engineering team at KoBold, your main role will be to enable systematic exploration and materially improve exploration success rates by making mineral exploration data broadly accessible to humans and machines. Past projects have included SIP (the Structured Ingest Pipeline), DataKit generation (producing curated sets of data on demand), and RAG (Retrieval-Augmented Generation, applying natural language processing to unstructured data). Our tech stack is primarily Python and includes Django, React, AWS, and additional technologies like Retool and Prefect.</p>
<p>Your work will empower KoBold to unlock invaluable insights and streamline intricate scientific processes. Collaborating with our exceptional team of data scientists, geologists, and other software engineers, you will have the opportunity to tackle complex problems head-on and collectively pave the way for discoveries of vital energy transition metals like lithium, copper, nickel, and cobalt. Together we can shape the future of mineral exploration and contribute to building a sustainable world.</p>
<p>This role will be responsible for:</p>
<ul>
<li>Deep engagement with exploration geologists and data scientists, continual learning about mineral exploration, and tailoring technology development to the needs of exploration project scientists</li>
<li>Building data pipelines and tooling for deriving advanced human and machine insights from exploration data, often leading a small group of software engineers to successful delivery</li>
<li>Developing expertise in KoBold&#39;s Data Systems and deeply understanding how they impact exploration</li>
<li>End-to-end ownership of projects, from design through implementation and testing to continued engagement with colleagues on exploration teams using your solutions</li>
<li>Responding well to design and code feedback, and providing feedback to teammates</li>
<li>Operationally managing the team&#39;s services and assisting scientific colleagues with our tooling</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>4+ years of software engineering experience, ideally building production cloud data systems</li>
<li>Proficiency with Python</li>
<li>Ability to write production-quality code that is correct, readable, well-tested, scalable, and extensible</li>
<li>Skill in large-scale system design</li>
<li>A track record of taking ownership from definition of the problem and delivering projects with demonstrated impact in an iterative manner</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain</li>
<li>Enjoyment of constant learning, so that you drive insights by using our tools in exploration, and willingness to work directly with geologists in the field</li>
<li>Ability to explain technical problems to, and collaborate on solutions with, domain experts who are not software developers</li>
<li>Strong communication skills and enjoyment of working with colleagues across the company</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities</li>
<li>Keenness not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Experience with modern frontend frameworks such as React</li>
<li>Experience with geospatial data and building map-based experiences</li>
<li>Familiarity with containerization and container orchestration platforms, such as Docker, AWS ECS, Kubernetes, etc.</li>
<li>Formal education or job exposure to natural sciences</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $210,000 USD</Salaryrange>
      <Skills>Python, Django, React, AWS, Retool, Prefect, Modern frontend frameworks, Geospatial data and map-based experiences, Containerization and container orchestration platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold</Employername>
      <Employerlogo>https://logos.yubhub.co/kobold.com.png</Employerlogo>
      <Employerdescription>KoBold is a privately held mineral exploration company and technology developer, with a portfolio of over 60 projects and a team of data scientists, software engineers, and exploration geologists.</Employerdescription>
      <Employerwebsite>https://www.kobold.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>120000</Compensationmin>
      <Compensationmax>210000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4678367005</Applyto>
      <Location>Remote, United Kingdom</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>946d6893-cbb</externalid>
      <Title>Infrastructure Security Engineer (USA)</Title>
      <Description><![CDATA[<p>As a member of the Infrastructure Security Team within the Product Security Department, you will work with teams across GitLab to ensure that the components that comprise our cloud infrastructure are built with the resiliency and security expectations that our customers depend on to power their software factories.</p>
<p>We’re looking for an Intermediate Infrastructure Security Engineer to further our automation efforts in support of our GitLab Dedicated for Government product offering. You’ll have the opportunity to contribute to tooling that operates our FedRAMP environment, identify and develop remediations for infrastructure vulnerabilities, and partner with more senior engineers to review upcoming project architectures to ensure that they are built to the rigorous standards we hold.</p>
<p>In this role, you will:</p>
<ul>
<li>Support the Public Sector SRE team as a stable counterpart</li>
<li>Identify and help mitigate security issues, misconfigurations, and vulnerabilities related to GitLab’s cloud, container, and Kubernetes infrastructure</li>
<li>Build tooling to increase our visibility into environments to expedite vulnerability detection</li>
<li>Own efforts securing GitLab&#39;s FedRAMP environment</li>
<li>Support other security teams as an Infrastructure SME</li>
<li>Document best practices and remediations to help engineers learn from common vulnerability types</li>
<li>Partner with senior engineers to review new architectures and projects, and provide feedback cross-functionally</li>
<li>Fulfill the Product Security Division mission of securing GitLab infrastructure with our own product (“dogfooding”)</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Hands-on experience with public cloud providers (e.g., AWS, GCP, Azure)</li>
<li>Development experience with Ruby, Python, or Go</li>
<li>Experience with Infrastructure-as-Code (IaC) tools (e.g., Terraform, Ansible, Chef)</li>
<li>Knowledge of the Linux operating system</li>
<li>Familiarity with containers (Docker) and orchestration platforms (Kubernetes)</li>
<li>An interest in Information Security</li>
<li>Demonstrated experience working collaboratively with cross-functional teams</li>
<li>The ability to communicate over text-based media (Slack, GitLab Issues, email) and succinctly document technical details</li>
<li>Alignment with our values, and a commitment to working in accordance with them</li>
</ul>
<p>Due to government requirements, you must be a United States Citizen (defined as any individual who is a citizen of the United States by law, birth, or naturalization) to fill this position.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$103,600-$185,000 USD</Salaryrange>
      <Skills>public cloud providers, Ruby, Python, Go, Infrastructure-as-Code (IaC) tools, Linux operating system, containers (Docker), orchestration platforms (Kubernetes), Information Security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>103600</Compensationmin>
      <Compensationmax>185000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8459132002</Applyto>
      <Location>Remote, US</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7f94351-85f</externalid>
      <Title>Principal Backend Software Engineer - Scaling</Title>
      <Description><![CDATA[<p>We are seeking an experienced Principal Backend Software Engineer to join our Scaling team. In this pivotal role, you will lead the development of our product&#39;s core components, ensuring they interact seamlessly with other services and systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architect and develop pipelines to ingest and analyse data from network devices and other sources.</li>
<li>Enhance scalability and performance by optimising computational processes and implementing solutions that scale with growing data and user demands.</li>
<li>Innovate on product features by developing new features leveraging our network model to provide actionable insights.</li>
<li>Collaborate with product teams to translate user needs into technical solutions.</li>
<li>Provide technical leadership by mentoring and guiding junior engineers, fostering a culture of excellence.</li>
<li>Collaborate cross-functionally to ensure cohesive integration of services.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>A Bachelor&#39;s degree in Computer Science or a related field, with a Master&#39;s or Ph.D. preferred.</li>
<li>8+ years of full lifecycle software development experience.</li>
<li>Proven experience in backend development using Java, C++, or similar languages.</li>
<li>Strong background in object-oriented design and development.</li>
</ul>
<p>Technical skills include proficiency with databases, algorithms, and design for performance and scalability, as well as in-depth knowledge of software architecture, design patterns, and best practices.</p>
<p>Soft skills include excellent problem-solving abilities, strong communication skills, and the ability to work collaboratively in a fast-paced environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$240,000 - $270,000</Salaryrange>
      <Skills>Java, C++, object-oriented design, database management, algorithm design, performance optimisation, scalability, software architecture, design patterns, best practices, containerization tools, orchestration platforms, big data technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Forward Networks</Employername>
      <Employerlogo>https://logos.yubhub.co/forwardnetworks.com.png</Employerlogo>
      <Employerdescription>Forward Networks is a technology company founded in 2013 by four Stanford Ph.D.s, specialising in network management and security.</Employerdescription>
      <Employerwebsite>https://www.forwardnetworks.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>240000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/forwardnetworks/jobs/6221411003</Applyto>
      <Location>Santa Clara, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dc8cf321-418</externalid>
      <Title>Staff Software Engineer, Backend (AI Agent Integrations)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI.</p>
<p>Cresta&#39;s AI Agent team is building enterprise-grade AI Agents that can operate inside real-world contact center environments. A critical part of that mission is enabling our AI Agents to seamlessly integrate with customers&#39; CCaaS (Contact Center as a Service) platforms, including voice and digital channels, and to smoothly transition conversations between AI and human agents when needed.</p>
<p>This team is focused on building the backend systems that allow our AI Agents to:</p>
<ul>
<li>Integrate deeply with leading CCaaS platforms</li>
<li>Participate in live customer conversations across voice and chat</li>
<li>Maintain full conversation state and context</li>
<li>Perform real-time actions within the CCaaS ecosystem</li>
<li>Seamlessly hand off conversations to human agents, without losing context, history, or workflow state</li>
<li>Support human agents with AI assistance after transfer</li>
</ul>
<p>We are looking for strong backend engineers who want to work at the intersection of distributed systems, real-time communication, enterprise integrations, and AI Agent orchestration.</p>
<p>As a Staff Backend Engineer, you will lead the architecture and technical direction of Cresta’s AI Agent integration platform. You will define how our AI Agents connect to, operate within, and scale across complex enterprise ecosystems.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the architecture and evolution of Cresta’s AI Agent integration framework across CCaaS platforms</li>
<li>Design scalable, extensible backend systems that manage real-time conversation state, session lifecycle, and context propagation</li>
<li>Establish architectural patterns for AI-to-human handoff that ensure durability, reliability, and seamless customer experience</li>
<li>Define integration strategies for voice, chat, messaging, routing, and agent desktop APIs across enterprise platforms</li>
<li>Drive system design for high availability, low latency, and fault tolerance in real-time environments</li>
<li>Set standards for observability, monitoring, incident response, and operational excellence</li>
<li>Partner closely with ML engineers to operationalize AI Agent capabilities into production-grade systems</li>
<li>Influence technical roadmap and prioritization in collaboration with engineering leadership</li>
<li>Mentor senior engineers and raise the bar for backend engineering excellence across the organization</li>
<li>Lead complex cross-team technical initiatives from design through production rollout</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Bachelor’s degree in Computer Science or related field</li>
<li>8+ years of experience building scalable backend systems in production environments</li>
<li>Demonstrated experience leading architecture for large-scale distributed systems</li>
<li>Deep expertise in API design (REST, gRPC) and service-oriented architectures</li>
<li>Strong understanding of real-time communication systems and low-latency system design</li>
<li>Experience designing integrations with third-party enterprise platforms and APIs</li>
<li>Proven track record of driving technical direction across teams</li>
<li>Experience with containerized environments (Kubernetes, Docker)</li>
<li>Experience with cloud platforms such as AWS, GCP, or Azure</li>
<li>Strong expertise in reliability engineering, observability, and enterprise-grade security</li>
<li>Experience with CCaaS platforms, contact center systems, or real-time communications is highly valued</li>
<li>Familiarity with AI Agents, LLM-based systems, or AI orchestration platforms is a strong plus</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs</li>
<li>Paid parental leave to support you and your family</li>
<li>Monthly Health &amp; Wellness allowance</li>
<li>Work from home office stipend to help you succeed in a remote environment</li>
<li>Lunch reimbursement for in-office employees</li>
<li>PTO: 3 weeks in Canada</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>API design, Service-oriented architectures, Real-time communication systems, Low-latency system design, Containerized environments, Cloud platforms, Reliability engineering, Observability, Enterprise-grade security, CCaaS platforms, Contact center systems, Real-time communications, AI Agents, LLM-based systems, AI orchestration platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a technology company that specializes in developing AI-powered contact center solutions.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5137152008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5a4be76f-140</externalid>
      <Title>FBS Marketing Automation &amp; Integration Engineer</Title>
<Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers’ operations, with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>The team is responsible for architecting and maintaining scalable MarTech solutions, with a focus on data integration, customer journey orchestration, and marketing automation. This team operates within the Data, Tech, and Operations tower of the Direct BU.</p>
<p>The Marketing Automation &amp; Integration Engineer role centers on the implementation and optimization of a MarTech data flow pattern involving Snowflake, Segment, Braze, and other SaaS platforms. Key responsibilities include:</p>
<ul>
<li>Design and maintain data pipelines between Snowflake, Segment CDP, Braze, and additional platforms</li>
<li>Implement real-time and batch data ingestion strategies</li>
<li>Manage customer event tracking and identity resolution within Segment</li>
<li>Orchestrate personalized marketing campaigns in Braze using dynamic segmentation and behavioral triggers</li>
<li>Ensure data integrity and feedback loops from Braze back into Snowflake via Segment</li>
<li>Automate data transformations and enrichment using scripting languages</li>
<li>Monitor system performance and troubleshoot integration issues across platforms</li>
</ul>
<p>This position comes with competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Segment CDP, Braze, Snowflake, Scripting Languages (Python / JS), Reverse ETL, Data Orchestration Platforms, Customer Data Schema Design, Data modeling and ETL/ELT Pipeline, API Integrations / Webhooks, Customer journey mapping and automation logic, Familiarity with insurance industry data and customer lifecycle models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services company that provides IT consulting, systems integration, and business process outsourcing services.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qJr4ny8yGpdyCcPXUusbL6/remote-fbs-marketing-automation-%26-integration-engineer-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b7de618e-5e1</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>Join our Site Reliability Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Site Reliability Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking SREs who are passionate about building and maintaining resilient systems at scale. Your mission will be to design and implement robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability and performance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and Implement Observability Solutions: Develop comprehensive monitoring and alerting systems using modern observability tools. Create dashboards and metrics that provide real-time visibility into system health and performance. Implement logging strategies that enable quick problem identification and resolution.</li>
<li>Drive Automation and Infrastructure as Code: Architect and implement infrastructure automation solutions using tools like Terraform, Ansible, or Pulumi. Design and maintain CI/CD pipelines that enable reliable and consistent deployments. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Establish SLOs and SLIs: Work with product and engineering teams to define and implement Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Build systems to track and report on these metrics, ensuring we maintain high reliability standards while balancing innovation speed.</li>
<li>Incident Management and Response: Lead incident response efforts, conducting thorough post-mortems, and implementing improvements to prevent future occurrences. Develop and maintain runbooks for critical services. Build tools and processes that reduce Mean Time To Recovery (MTTR).</li>
<li>Performance Optimization: Identify and resolve performance bottlenecks across our infrastructure. Implement capacity planning strategies and optimize resource utilization. Work on reducing latency and improving system efficiency across global regions.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4-8 years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering)</li>
<li>Strong programming skills in languages commonly used for automation (Python, Go, or similar)</li>
<li>Deep understanding of distributed systems</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies</li>
<li>Proven track record of implementing and maintaining monitoring/observability solutions</li>
<li>Strong incident management skills with experience leading incident response</li>
<li>Experience with infrastructure as code and configuration management tools</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with Google Cloud Platform (GCP) services and tools</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.)</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Problem-solving mindset: Ability to approach complex operational challenges systematically and devise effective solutions</li>
<li>Self-directed and autonomous: Capable of working independently while collaborating effectively with cross-functional teams</li>
<li>Strong communication skills: Ability to explain complex technical concepts to both technical and non-technical audiences</li>
<li>Continuous learning: Passion for staying current with industry best practices and new technologies</li>
<li>Focus on automation: Strong belief in automating repetitive tasks and building self-healing systems</li>
</ul>
<p><strong>Full-Time Employee Benefits Include</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Want to Learn More About What We Are Up To?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160K – $250K</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability solutions, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading provider of software development tools.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/f6e6158e-eb89-4008-81ea-1b7512bc509d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>323bc85d-b69</externalid>
      <Title>Staff Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Staff Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Drive Automation and Infrastructure as Code: Architect, build, and improve automation to eliminate toil and operational work. Design and maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimize Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance-tune and optimize our cloud deployments (Kubernetes, Docker, GCP). Identify and resolve performance bottlenecks, implement capacity planning strategies, and reduce latency across global regions.</li>
<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>
<li>Drive Cross-Company Improvements: Partner directly with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>
<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the entire engineering lifecycle, from local development to production monitoring.</li>
<li>Debug and Harden Systems: Dive deep into debugging extremely difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>
<li>Provide Staff-Level Guidance: Review feature and system designs, acting as an owner for the security, scale, and operational integrity of those designs.</li>
<li>Educate and Mentor: Educate, mentor, and hold accountable the engineering team to improve the reliability of our systems, making reliability a core value of the Replit engineering culture.</li>
<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>
</ul>
<p><strong>Required Skills and Experience:</strong></p>
<ul>
<li>8-10 years of experience in Infrastructure Engineering or similar roles (DevOps, Systems Engineering, Site Reliability Engineering).</li>
<li>Strong programming skills in languages like Python or Go.</li>
<li>You write high-quality, well-tested code.</li>
<li>Deep understanding of distributed systems. You&#39;ve designed, built, scaled, and maintained production services and know how to compose a service-oriented architecture.</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>
<li>Proven track record of implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>
<li>Strong incident management skills with experience leading incident response and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly and simply and a bias toward open, transparent cultural practices.</li>
<li>Strong interpersonal skills, with experience working with engineers from junior to principal levels.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Deep experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>
<li>Experience designing and building reliable systems capable of handling high throughput and low latency.</li>
<li>Experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth environments.</li>
<li>Experience writing company-facing blog posts and training materials.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$220K – $325K</Salaryrange>
      <Skills>Infrastructure Engineering, DevOps, Systems Engineering, Site Reliability Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability solutions, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog, Terraform, Rapid-growth environments, Company-facing blog posts, Training materials</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/6481ec1e-527c-4c1f-a041-2fb5021e7bd5</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>8481d62a-9bf</externalid>
      <Title>Software Engineer, Reliability</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Software Engineer, Reliability</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p><strong>Job Description</strong></p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>Join the engineering teams that bring OpenAI’s ideas safely to the world!</p>
<p>The Applied Engineering team works across research, engineering, product, and design to bring OpenAI’s technology to consumers and businesses. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.</p>
<p><strong>About the Role</strong></p>
<p>As OpenAI continues to grow, we are looking for experienced, problem-solving engineers to ensure our systems scale. Our success depends on our ability to iterate quickly on products while keeping them performant and reliable. You will work in a deeply iterative, collaborative, fast-paced environment to bring our technology to millions of users around the world, delivered with safety and reliability in mind. As a reliability expert, you will be at the forefront of maintaining and enhancing the stability, scalability, and performance of our rapidly evolving infrastructure, working closely with cross-functional teams, including software engineers, product managers, and data scientists, to build and maintain resilient systems that can handle our growing user base and workload.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and implement solutions to ensure the scalability of our infrastructure to meet rapidly increasing demands.</li>
<li>Build and maintain the load, chaos and synthetic testing software leveraged by development teams to make the systems they design and operate more reliable.</li>
<li>Build and maintain automation tools to streamline repetitive tasks and improve system reliability.</li>
<li>Build and maintain the platform for CPU/storage, GPU, and network lifecycle management to drive efficiency, accountability and support dynamic optimization of our resources.</li>
<li>Implement fault-tolerant and resilient design patterns to minimize service disruptions.</li>
<li>Develop and maintain service level objectives (SLOs) and service level indicators (SLIs) to measure and ensure system reliability.</li>
<li>Partner with researchers, engineers, product managers, and designers to bring new features and research capabilities to the world.</li>
<li>Participate in an on-call rotation to respond to critical incidents and ensure 24/7 system availability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a track record of accelerating engineering reliability by empowering your fellow engineers with excellent tooling and systems.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Enjoy seeking out and addressing bottlenecks and areas for performance improvement in our systems.</li>
<li>Utilize Infrastructure as Code (IaC) principles to automate infrastructure provisioning and configuration management.</li>
<li>Are experienced in collaborating with cross-functional teams to ensure that reliability and scalability are considered in the design and development of new features and services.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Technology, or a related field (or equivalent work experience).</li>
<li>Proven experience as an SWE focused on reliability or a similar role in a fast-paced, rapidly scaling company.</li>
<li>Strong proficiency in cloud infrastructure.</li>
<li>Proficiency in programming languages.</li>
<li>Experience with containerization technologies and container orchestration platforms like Kubernetes.</li>
<li>Knowledge of IaC tools such as Terraform or CloudFormation.</li>
<li>Excellent problem-solving and troubleshooting skills.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230K – $490K</Salaryrange>
      <Skills>cloud infrastructure, programming languages, containerization technologies, container orchestration platforms, IaC tools, problem-solving and troubleshooting skills, communication and collaboration skills, Infrastructure as Code (IaC) principles, automated infrastructure provisioning and configuration management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/1faee5e7-3b2f-4d8c-9a6f-ff0f2a4a42a7</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>3f16d353-491</externalid>
      <Title>Software Engineer, Infrastructure Reliability</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Infrastructure Reliability</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$255K – $385K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We’re hiring Software Engineers to join our Applied Infrastructure organization, and more specifically for our Database Systems and Online Storage teams. These teams operate with a high degree of autonomy and are deeply collaborative, with a shared mandate to raise the bar on safety, reliability, and velocity across OpenAI.</p>
<p><strong>About the Role</strong></p>
<p>You’ll be at the heart of scaling and hardening the infrastructure that powers some of the most widely used AI systems in the world. You’ll help ensure our systems are highly reliable, observable, performant, and secure—so researchers can iterate quickly, and products like ChatGPT and the OpenAI API can serve millions of users safely and effectively.</p>
<p>This is a hands-on, high-leverage role for engineers who thrive on ownership, love solving deep technical problems across the stack, and want to work on systems that support cutting-edge research and deploy at global scale. You’ll play a key part in shaping technical direction, proactively improving system resilience, and collaborating closely with infra, product, and research teams to turn complex infrastructure into reliable platforms.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Design, build, and operate reliable and performant systems used across engineering.</li>
<li>Identify and fix performance bottlenecks and inefficiencies, ensuring our infrastructure can scale to the next order of magnitude.</li>
<li>Dig deep to resolve complex issues.</li>
<li>Continuously improve automation to reduce manual work. Improve internal tooling and our developer experience.</li>
<li>Contribute to incident response, postmortems, and the development of best practices around system reliability and scalability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a deep understanding of distributed systems principles and a proven track record in building and operating scalable and reliable systems.</li>
<li>Have a keen eye for performance and optimization. You know how to squeeze the most performance out of complex, globally-distributed systems.</li>
<li>Have experience operating orchestration systems such as Kubernetes at scale and building abstractions over cloud platforms.</li>
<li>Are comfortable working in Linux environments, and with tools like Kubernetes, Terraform, CI/CD pipelines, and modern observability stacks.</li>
<li>Are experienced in collaborating with cross-functional teams to ensure that reliability and scalability are considered in the design and development of new features and services.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Are comfortable with ambiguity and rapid change.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of relevant industry experience, with 2+ years leading large-scale, complex projects or teams as an engineer or tech lead.</li>
<li>A passion for distributed systems at scale with a focus on reliability, scalability, security, and continuous improvement.</li>
</ul>
<ul>
<li>Proven experience as an reliability engineer, production engineer, or a similar role in a fast-paced, rapidly scaling company.</li>
</ul>
<ul>
<li>Strong proficiency in cloud infrastructure (like AWS, GCP, Azure) and IaC tools such as Terraform. Proficiency in programming / scripting languages.</li>
</ul>
<ul>
<li>Experience with containerization technologies and container orchestration platforms like Kubernetes.</li>
</ul>
<ul>
<li>Experience with observability tools such as Datadog, Prometheus, Grafana, Splunk and ELK stack.</li>
</ul>
<ul>
<li>Experience with microservices architecture and service mesh technologies.</li>
</ul>
<ul>
<li>Knowledge of security best practices in cloud environments.</li>
</ul>
<ul>
<li>Strong understanding of distributed systems, networking, and database technologies.</li>
</ul>
<ul>
<li>Excellent problem-solving skills and ability to work in a fast-paced environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company that develops and applies general-purpose AI technologies aligned with human values.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$255K – $385K</Salaryrange>
      <Skills>cloud infrastructure, IaC tools, programming/scripting languages, containerization technologies, container orchestration platforms, observability tools, microservices architecture, service mesh technologies, security best practices, distributed systems, networking, database technologies, Kubernetes, Terraform, Datadog, Prometheus, Grafana, Splunk, ELK stack</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company that develops and applies general-purpose AI technologies aligned with human values.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>255000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/779b340d-e645-4da1-a923-b3070a26d936</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4a7597fd-d7a</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact worldwide as part of a team spanning five continents. Razer is also a great place to work, providing the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
<li>Develop high-performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting a variety of AI applications.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector and graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>vector databases, graph databases, Neo4j, data engineering, AI/ML-driven data architectures, Python, SQL, AWS, Azure, Google Cloud Platform, Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, DBT, streaming and batch data processing, data lakes, lakehouses, SQL and NoSQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global company that creates cutting-edge products and experiences that define the ultimate gameplay. They are guided by their mission &apos;For Gamers. By Gamers.&apos; and are relentlessly pushing boundaries and leading the charge in AI for gaming, shaping the future of the industry.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-01-01</Postedate>
    </job>
    <job>
      <externalid>e5eb908e-6f9</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector and graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>vector databases, graph databases, Neo4j, data engineering, AI/ML-driven data architectures, Python, SQL, Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, DBT</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>