<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>812a388c-bab</externalid>
      <Title>Senior Multi-GPU Signal Processing and System Architecture Engineer</Title>
      <Description><![CDATA[<p>As a Senior Multi-GPU Signal Processing and System Architecture Engineer, you will design and implement real-time signal-processing subsystems that convert physics-based channel descriptions into received signals for large numbers of emulated devices, across systems of potentially thousands of interconnected GPUs.</p>
<p>You will work on foundational technology for 5G and 6G network simulation, using NVIDIA&#39;s world-class compute and interconnect platforms. Your expertise will be crucial in architecting the inter-cell data-flow layer, ensuring that the information each cell needs to model interference from its neighbors is compressed, transported, and consumed within the available NVLink and NIC budgets at scale.</p>
<p>You will collaborate with the propagation engine and RAN stack teams to orchestrate the end-to-end simulation pipeline, ensuring that propagation updates, channel application, and stack execution remain synchronized across hundreds or thousands of GPUs. You will assess design and implementation trade-offs between physical fidelity, latency, and system scalability.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>PhD in high-performance computing, computer architecture, signal processing, or wireless communications (or equivalent experience)</li>
<li>12+ years of relevant professional experience</li>
<li>Proficiency in CUDA kernel design with attention to memory hierarchy, register pressure, and HBM bandwidth planning, with a track record of writing production-quality GPU code that meets hard real-time deadlines</li>
<li>Demonstrated ability to build and reason about data flows across multi-device GPU systems (NVLink, NIC/RDMA) with explicit bandwidth and latency accounting</li>
<li>Working knowledge of OFDM signal processing and the 5G NR physical layer, sufficient to implement and validate a channel-emulation pipeline</li>
<li>Impactful publications involving GPU-accelerated numerical workloads or real-time system design</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience with GPU-accelerated RAN platforms, L1/L2 software stacks, or channel emulators</li>
<li>Knowledge of high-bandwidth GPU interconnects (NVLink, NVSwitch) and their scaling properties</li>
<li>Familiarity with massive MIMO beamformer design and MU-MIMO precoding</li>
</ul>
<p>If you&#39;re eager to contribute to crafting the future of telecommunications and meet the above qualifications, we&#39;d love to hear from you. Submit your application and join NVIDIA as we continue to push the boundaries of what&#39;s possible.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CUDA kernel design, Memory hierarchy, Register pressure, HBM bandwidth planning, GPU-accelerated numerical workloads, Real-time system design, OFDM signal processing, 5G NR physical layer, GPU-accelerated RAN platforms, L1/L2 software stacks, Channel emulators, High-bandwidth GPU interconnects, Massive MIMO beamformer design, MU-MIMO precoding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA designs and manufactures graphics processing units (GPUs) and high-performance computing hardware.</Employerdescription>
      <Employerwebsite>https://www.nvidia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-Multi-GPU-Signal-Processing-and-System-Architecture-Engineer_JR2016090</Applyto>
      <Location>Santa Clara</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>9d30cfce-beb</externalid>
      <Title>Security Engineer - Azure Government</Title>
      <Description><![CDATA[<p>We are seeking a skilled Azure Security Engineer to design, implement, and maintain robust security controls across our Azure Gov Cloud environment. In this hands-on role, you will build, strengthen, and maintain our cloud security posture, protect critical workloads, and collaborate with engineering, DevOps, and compliance teams to embed security throughout the development lifecycle.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing, implementing, and managing security architecture for Azure Government and Commercial deployments</li>
<li>Configuring and optimizing Microsoft Defender for Cloud, Microsoft Sentinel, Microsoft Defender for Endpoint, and related services for threat detection, vulnerability management, and automated response</li>
<li>Designing and enforcing identity &amp; access management using Microsoft Entra ID, Privileged Identity Management (PIM), Conditional Access policies, RBAC, and just-in-time access</li>
<li>Securing network architectures with Azure Firewall, Network Security Groups (NSGs), DDoS Protection, Web Application Firewall (WAF), Network Watcher, and private endpoints</li>
<li>Protecting data at rest and in transit via Azure Key Vault, encryption strategies, data classification, and information protection controls</li>
<li>Developing and maintaining security policies, initiatives, and blueprints using Azure Policy and Microsoft Purview for compliance (NIST, FedRAMP, CMMC, STIGs, etc.)</li>
<li>Performing threat hunting, incident response, and forensics using Sentinel playbooks, Log Analytics, and KQL queries</li>
<li>Conducting security reviews of Infrastructure as Code (IaC), containers, Kubernetes (AKS), and serverless workloads</li>
<li>Collaborating with developers and architects to implement DevSecOps practices, including secure CI/CD pipelines, code scanning, and secure defaults</li>
<li>Monitoring and remediating security findings, reducing attack surface, and improving overall security posture per the Microsoft Cloud Security Benchmark (MCSB)</li>
<li>Deploying configurations and compliance policies to Azure AVD endpoints using Intune and other Azure-native services</li>
</ul>
<p>Basic qualifications include:</p>
<ul>
<li>Active U.S. security clearance (e.g., Secret, Top Secret) or eligibility to obtain one</li>
<li>3+ years of experience in cloud security, cybersecurity engineering, or related roles, with a strong Azure focus</li>
<li>Deep hands-on expertise with core Azure security services: Microsoft Defender suite, Sentinel, Intune, Entra ID, Key Vault, Azure Policy, Firewall, Network Watcher, and Purview</li>
<li>Strong understanding of DLP implementation both in the cloud and on endpoints, using Purview and other Microsoft-native controls</li>
<li>Experience implementing security in hybrid/multi-cloud environments</li>
<li>Proficiency in scripting/automation (PowerShell, Azure CLI, Bicep/ARM templates, Terraform)</li>
<li>Strong understanding of identity federation, zero-trust principles, encryption, network security, and vulnerability management</li>
<li>Familiarity with compliance frameworks (NIST, FedRAMP, CMMC, STIGs, etc.) and regulatory requirements</li>
<li>Excellent problem-solving and analytical skills</li>
<li>Strong verbal and written communication skills and the ability to stay composed under pressure</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Microsoft Certified: Azure Security Engineer Associate (AZ-500) or Microsoft Cybersecurity Architect (SC-100)</li>
<li>Additional relevant certifications (e.g., CISSP, CCSP, Microsoft Certified: Azure Administrator, AWS Security Specialty, SANS GCPS, SANS GCAD)</li>
<li>Deep experience with detection and response engineering and SOC operations</li>
<li>Knowledge of container security (Docker, AKS), secure DevOps, or AI/ML workload protection</li>
<li>Prior experience with government regulatory frameworks such as FedRAMP and CMMC</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Azure Security Engineer, Microsoft Defender for Cloud, Microsoft Sentinel, Microsoft Defender for Endpoint, Azure Key Vault, Azure Policy, Microsoft Purview, Identity &amp; Access Management, Network Security, Data Loss Prevention, Compliance Frameworks, Cloud Security Posture Management, Threat Hunting, Incident Response, Forensics, Infrastructure as Code, Containers, Kubernetes, Serverless Workloads, DevSecOps, CI/CD Pipelines, Code Scanning, Secure Defaults, Microsoft Cloud Security Benchmark, Microsoft Certified: Azure Security Engineer Associate (AZ-500), Microsoft Cybersecurity Architect (SC-100), CISSP, CCSP, Microsoft Certified: Azure Administrator, AWS Security Specialty, SANS GCPS, SANS GCAD, Detection and Response Engineering, SOC Operations, Container Security, Secure DevOps, AI/ML Workload Protection, Government Regulations Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The organization is small and highly motivated, with a flat organizational structure.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5050657007</Applyto>
      <Location>Palo Alto, CA; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>955a1285-ace</externalid>
      <Title>Staff Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Forward Deployed Engineer to join our team in San Francisco. As a Forward Deployed Engineer, you will work directly with enterprise customers to help them deploy, scale, and operationalize their AI workloads on Fal. This is a highly technical, customer-facing role where you&#39;ll act as the bridge between Sales, Product and Infrastructure teams.</p>
<p>You&#39;ll join customer calls, deeply understand their architecture and needs, and translate those into actionable implementation plans and product requirements. You will be responsible for unblocking customer deployments, accelerating onboarding, and ensuring enterprise accounts successfully reach production fast.</p>
<p>This is a role for someone who loves solving real-world engineering problems and wants direct ownership over outcomes that drive revenue and product growth.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Join enterprise onboarding calls and act as the technical owner for deployments</li>
<li>Help customers integrate their models into Fal Serverless (APIs, scaling, observability, deployment workflows)</li>
<li>Debug customer issues end-to-end across frontend, backend, and infra layers</li>
<li>Translate customer feedback into clear product specs, tasks, and engineering priorities</li>
<li>Work closely with Product + Infra to ensure enterprise needs are shipped into the platform</li>
<li>Build custom proofs-of-concept or lightweight integrations to unblock adoption</li>
<li>Identify repeatable patterns across customers and turn them into reusable product features</li>
<li>Improve internal tooling, onboarding flows, and docs based on real customer pain points</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong engineering background (proficiency with TypeScript, Python, Postgres, and Next.js)</li>
<li>Experience working with customers in a technical capacity (Solutions Engineer, Forward Deployed Engineer, DevRel Engineer, or similar)</li>
<li>Comfortable jumping into ambiguous customer problems and finding solutions fast</li>
<li>Ability to understand complex systems and communicate clearly with both technical and non-technical stakeholders</li>
<li>Strong written communication skills (turning customer conversations into actionable specs/tasks)</li>
<li>Experience working across APIs, infrastructure, and cloud environments</li>
<li>High ownership mentality: you take responsibility for customer success end-to-end</li>
<li>Comfort operating in a fast-moving, low-process environment</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with serverless platforms, infra products, or developer platforms</li>
<li>Familiarity with observability tooling (logs, metrics, tracing)</li>
<li>Background in distributed systems, Kubernetes, or cloud-native deployments</li>
<li>Experience with AI/ML workloads in production</li>
<li>Experience writing documentation, onboarding guides, or customer playbooks</li>
</ul>
<p><strong>Why Join</strong></p>
<ul>
<li>Own the success of Fal&#39;s most important enterprise deployments</li>
<li>Work on a product used at massive scale with real production workloads</li>
<li>Direct influence over product roadmap through customer feedback loops</li>
<li>High autonomy and visibility across Product, Infra, and Sales leadership</li>
<li>Be a foundational member of a rapidly growing product vertical</li>
<li>Work at one of the fastest-growing AI startups, helping shape a new category</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Interesting and challenging work</li>
<li>Competitive salary and equity</li>
<li>A lot of learning and growth opportunities</li>
<li>We offer visa sponsorship and will help you relocate to San Francisco.</li>
<li>Health, dental, and vision insurance (US)</li>
<li>Regular team events and offsites</li>
</ul>
<p><strong>Compensation</strong></p>
<p>$150,000 - $230,000 + equity + comprehensive benefits package</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $230,000</Salaryrange>
      <Skills>TypeScript, Python, Postgres, Next.js, Serverless platforms, Infra products, Developer platforms, Observability tooling, Distributed systems, Kubernetes, Cloud-native deployments, AI/ML workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal is an AI startup that builds infrastructure for AI inference. It has reached a $4.5B valuation and has a lean team of ~70 employees.</Employerdescription>
      <Employerwebsite>https://www.fal.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4129387009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>198d64d4-207</externalid>
      <Title>Senior/Staff Site Reliability Engineer</Title>
      <Description><![CDATA[<p>You are a seasoned SRE who keeps production infrastructure running at scale. You own the reliability and availability of customer-facing systems, from Kubernetes clusters to deployment pipelines to the networking layer that connects it all. You think in SLOs, automate ruthlessly, and treat every incident as a chance to make the system better.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Own and operate our Kubernetes infrastructure: cluster lifecycle, upgrades, networking, and multi-tenant isolation for customer workloads</li>
<li>Build and maintain CI/CD pipelines and deployment infrastructure</li>
<li>Leverage AI extensively to automate analysis and resolution of production issues and to improve software development speed, reliability, and maintainability</li>
<li>Build dashboards, alerting, and anomaly detection across our systems</li>
<li>Define and enforce SLOs and build out incident response processes</li>
<li>Manage and improve our networking, load balancing, and service mesh configurations</li>
<li>Drive reliability improvements across the stack through automation, runbooks, and chaos engineering</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience managing critical production systems and software development workflows</li>
<li>Strong production experience setting up and operating Kubernetes at scale, using infrastructure-as-code (Terraform, Ansible)</li>
<li>Deep knowledge of Linux networking, container networking (CNI plugins, VXLAN, BGP), and DNS</li>
<li>Experience building CI/CD systems and GitOps workflows (FluxCD, ArgoCD)</li>
<li>Proficiency in Python and either Go or Bash for tooling and automation</li>
<li>Strong experience with logging, monitoring, and alerting (Prometheus, Grafana, Loki, Thanos, VictoriaMetrics, Datadog)</li>
<li>Excellent communication skills and the ability to drive technical decisions across teams</li>
<li>Self-starter who executes quickly, takes ownership, and constantly seeks improvement</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with managing GPU and AI/ML workloads</li>
<li>Experience with kernel-based monitoring and routing (eBPF, XDP)</li>
<li>Experience with security tooling (Falco, Coroot, SIEM)</li>
<li>Experience with bare metal Kubernetes networking (Calico, Cilium, MetalLB)</li>
<li>Experience with distributed storage systems (Ceph, Longhorn, etc.)</li>
</ul>
<p><strong>Compensation</strong></p>
<ul>
<li>$150,000 is not listed here; this role offers $180,000-$250,000 plus equity and benefits</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Interesting and challenging work</li>
<li>A lot of learning and growth opportunities</li>
<li>Regular team events and offsites</li>
<li>Health, dental, and vision insurance (US)</li>
<li>Visa sponsorship and relocation assistance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-250,000</Salaryrange>
      <Skills>Kubernetes, Infrastructure-as-code, Linux networking, Container networking, CI/CD systems, GitOps workflows, Python, Go, Bash, Logging, Monitoring, Alerting, GPU and AI/ML workloads, Kernel-based monitoring and routing, Security tooling, Bare metal Kubernetes networking, Distributed storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal is a technology company that operates in the San Francisco area.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4146019009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e01f10da-4e4</externalid>
      <Title>Helpdesk Administrator</Title>
      <Description><![CDATA[<p>The Helpdesk Administrator works alongside the M&amp;E Helpdesk and the Helpdesk Coordinator as part of the team responsible for receiving, allocating, and progressing reactive emergency maintenance faults.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Covering the helpdesk in the absence of the Helpdesk Operative</li>
<li>Vetting service requests received via the CAFM system</li>
<li>Analysing job history and running reports to avoid duplication</li>
<li>Ordering parts required for the job</li>
<li>Prioritising urgent jobs, and planning and dispatching engineers to meet urgent demand</li>
<li>Planning and coordinating work for the current and next day</li>
<li>Escalating any complaints or issues as required</li>
<li>Ensuring the Reactive Work to Additional Work process is followed</li>
<li>Managing the completion process, reviews, and audit fails, and ensuring all closures are sent to the client via the CAFM system</li>
<li>Collating and submitting SLA extension requests to the client</li>
<li>Adhering to all SLAs/KPIs set against your role, including call answering times, quality assurance, and email response times when covering the Helpdesk</li>
<li>Utilising the CAFM system to obtain and provide mitigation for faults breached in the previous 24 hours (ready for period end)</li>
<li>Ensuring compliance with statutory and company procedures across all functions</li>
<li>Taking reasonable care for the health and safety of yourself and others who may be affected by your acts and omissions, and co-operating with your employer so far as is necessary to enable statutory duties to be carried out</li>
<li>Maintaining high attention to detail on all work submitted</li>
<li>Contributing to reducing levels of customer complaints</li>
<li>Undertaking other duties as directed by management</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive</Salaryrange>
      <Skills>Exceptional organisational skills, Ability to handle conflicting workloads and to work under pressure, Strong communication skills in both telephone and correspondence/report handling, An excellent telephone manner with the ability to communicate effectively at all levels delivering flawless customer service always, Ability to develop effective relations with key stakeholders including management and customers, Ability to set and achieve targets via effective engagement with stakeholder groups, Previous customer service representative or frontline support role, Experience in using CAFM system or asset management system, Rounded educational background and strong knowledge of Microsoft 365 systems</Skills>
      <Category>Operations</Category>
      <Industry>Facility Management</Industry>
      <Employername>ABM UK</Employername>
      <Employerlogo>https://logos.yubhub.co/abm.com.png</Employerlogo>
      <Employerdescription>ABM is one of the world&apos;s largest providers of integrated facility, engineering, and infrastructure solutions, serving a wide range of market sectors including commercial real estate, aviation, mission critical, and manufacturing and distribution.</Employerdescription>
      <Employerwebsite>https://www.abm.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/aqkVPTcKXFahxFdVvK9VT1/helpdesk-administrator-in-north-greenwich-at-abm-uk</Applyto>
      <Location>North Greenwich</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8e20eaf6-7f6</externalid>
      <Title>Data Operations, Associate</Title>
      <Description><![CDATA[<p><strong>About this role</strong></p>
<p>You will own advanced operational support and stability for enterprise data platforms, acting as the primary L2/L3 interface for ETL/ELT pipelines, orchestration, observability, and Snowflake workloads. The role bridges execution and engineering, with accountability for incident resolution, platform reliability, and operational improvement.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Own L1/L2 operational support for production data platforms, including data lakes, streaming pipelines, and Snowflake-based analytics.</li>
<li>Diagnose and resolve complex failures in ETL/ELT pipelines and orchestration frameworks, partnering with engineering where required.</li>
<li>Actively manage incidents, including impact assessment, remediation coordination, and post-incident documentation.</li>
<li>Improve monitoring, alerting, and observability coverage, identifying gaps and driving instrumentation enhancements.</li>
<li>Support onboarding of new pipelines and data products by validating operational readiness, scalability, and reliability.</li>
<li>Analyze recurring incidents and data quality issues, contributing to root cause analysis (RCA) and long-term remediation.</li>
<li>Mentor analysts through guidance on operational best practices, troubleshooting, and platform behavior.</li>
<li>Contribute to automation initiatives to reduce manual effort and improve operational efficiency.</li>
</ul>
<p><strong>Our benefits</strong></p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p><strong>Our hybrid work model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data platforms, ETL/ELT pipelines, orchestration, observability, Snowflake workloads, AWS, Azure, GCP, cloud-native data services, monitoring, alerting, observability systems</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment management services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/v9REx3w1EEK7y2df1zPkqK/data-operations%2C-associate-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>376da89d-421</externalid>
      <Title>HPC Manager</Title>
      <Description><![CDATA[<p>We are currently looking for an experienced HPC Manager to be responsible for the management, performance, and continuous evolution of the High Performance Computing (HPC) environment supporting CFD workloads and all related services at our UK site in Milton Keynes.</p>
<p>The role ensures maximum availability, performance, and scalability of the CFD compute cluster and its ecosystem, enabling engineering teams to run complex simulations efficiently in a highly competitive, performance-driven environment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>HPC &amp; CFD Infrastructure Management: Own and manage the CFD HPC cluster, including compute, storage, and high-performance networking; ensure optimal performance and availability of CFD workloads; manage job scheduling, resource allocation, and workload prioritization; oversee performance tuning, benchmarking, and system optimization; maintain and evolve parallel file systems and data pipelines supporting CFD; drive capacity planning and future HPC architecture evolution; travel occasionally to our UK branch in Milton Keynes (DC site); respond to critical issues affecting the computing cluster, including during weekends when necessary.</li>
<li>Collaboration with Engineering: Work closely with CFD and engineering teams to optimize simulation workflows; support users in maximizing efficiency of HPC resources; act as the primary point of contact for HPC-related topics at the UK site.</li>
<li>Operations &amp; Reliability: Ensure 24/7 reliability of HPC services supporting CFD activities; implement monitoring, alerting, and automation; lead troubleshooting of complex system and performance issues; manage the software stack, compilers, libraries, and tools used in CFD environments.</li>
<li>Leadership &amp; Continuous Improvement: Lead and develop a team of HPC engineers/administrators; define best practices, documentation, and operational procedures; continuously evaluate new technologies (GPU, cloud, hybrid HPC); drive efficiency, scalability, and innovation across HPC services.</li>
</ul>
<p><strong>What We Offer:</strong></p>
<ul>
<li>Working in a young, collaborative and international environment.</li>
<li>Tailored training.</li>
<li>Company Events / Briefings.</li>
<li>On site Gym.</li>
<li>Bonus scheme.</li>
<li>Annual salary review process.</li>
<li>Meal Tickets.</li>
<li>Free additional health insurance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux, cluster management, HPC schedulers, InfiniBand, low-latency networking, parallel file systems, CFD workloads, simulation environments, performance tuning, optimization, leadership, stakeholder management, English, GPU computing, container technologies, automation, scripting</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Visa Cash App Racing Bulls Formula 1 Team</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.redbull.com.png</Employerlogo>
      <Employerdescription>Red Bull-owned Formula 1 team operating at its UK site in Milton Keynes.</Employerdescription>
      <Employerwebsite>https://jobs.redbull.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.redbull.com/gb-en/milton-keynes-vcarb-f1-team-hpc-manager-prv-ref30239o</Applyto>
      <Location>Milton Keynes</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6f833620-2d5</externalid>
      <Title>Principal ML Platform Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Engineer to join the ML Platform team at Synthesia. Our team builds and operates the systems that allow researchers and product teams to train, serve, and deploy generative models reliably and efficiently. This includes research infrastructure, production serving systems, internal tooling, and the platform interfaces that connect them.</p>
<p>As a Principal Engineer, you&#39;ll design and improve the platform systems that support model training, evaluation, and production serving. You&#39;ll build infrastructure and tooling that make ML workloads more reliable, scalable, and cost-efficient. You&#39;ll develop internal tools and workflows that are easy to operate both by humans and by agents.</p>
<p>You&#39;ll work on the architecture behind how models are deployed, served, and operated across research and product environments. You&#39;ll improve how we schedule, monitor, and debug workloads running on GPUs and cloud infrastructure. You&#39;ll develop internal tools and abstractions and agentic systems that reduce operational overhead for researchers and engineers.</p>
<p>You&#39;ll drive improvements across observability, automation, reliability, and developer experience. You&#39;ll collaborate closely with researchers and product engineers to understand pain points and turn them into robust platform capabilities. You&#39;ll contribute to technical direction and make pragmatic architectural tradeoffs as the platform grows.</p>
<p>We&#39;re looking for a strong generalist with a systems mindset: someone who is comfortable working across infrastructure, backend systems, and tooling, and who has seen ML systems in practice. This is not a pure ML Engineer role. We&#39;re especially interested in people who think deeply about reliability, scalability, performance, and resource efficiency in complex production environments.</p>
<p>This is a hands-on IC role with significant ownership. You&#39;ll help shape how our ML platform evolves as we scale the number of models, workloads, tools and teams relying on it.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud infrastructure, Linux, infrastructure automation, Kubernetes, distributed workloads, Python, backend systems, tooling, observability, debugging, Terraform, Datadog, GitHub Actions, agentic systems, LLM-powered internal tools, workflow orchestration, performance optimization, scheduling, resource allocation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.ai.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. It was founded in 2017 and is headquartered in London.</Employerdescription>
      <Employerwebsite>https://synthesia.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/e9c63d3d-13cc-4049-ae0a-5fef402c595b</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8c2f4139-fca</externalid>
      <Title>AWS Cloud Engineer (m/w/d) for Racing Applications</Title>
      <Description><![CDATA[<p>Become part of a unique company within a fast-moving industry. We are looking for a Cloud Engineer to join our application development and works motorsport team in Cologne, Germany. In this role within the IT department, you will be responsible for developing and maintaining our AWS cloud infrastructure, supporting race team operations and data analytics. Partial remote work is possible.</p>
<p>Exciting projects and a place for technical freedom and innovation to get things moving. Attractive benefits packages, including competitive remuneration, social benefits, 30 days annual holiday, car leasing, and free on-site gym.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and manage hybrid AWS infrastructure using Terraform across multiple accounts and regions</li>
<li>Take part in crucial software architecture decisions</li>
<li>Maintain secure and scalable hybrid setups between edge, cloud, and on-premises environments</li>
<li>Maintain critical infrastructure in the AWS Cloud</li>
<li>Support CI/CD pipelines and infrastructure automation for internal software applications</li>
<li>Collaborate with software developers and engineers to enable secure cloud-native solutions for motorsport analytics, APIs, and AI workloads</li>
<li>Opportunities to participate in the full software development life cycle</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree, or equivalent, in computer science, cloud engineering, or a related field</li>
<li>Relevant hands-on experience with AWS and Infrastructure as Code (IAC) in a production environment</li>
<li>Familiarity with AWS services such as EKS, ECS, VPC, EC2, IAM, CloudTrail, Athena, RDS, ECR, and S3</li>
<li>Understanding of CI/CD workflows and DevOps practices in cloud-native environments</li>
<li>Experience with containerized workloads (K8s and Docker)</li>
<li>Fluency in English; German language skills are a plus</li>
<li>Experience with one or more of the following would be an advantage: Terraform, Azure DevOps Pipelines, Python, AI/ML workloads, Data Engineering, Networking</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, Infrastructure as Code (IAC), EKS, ECS, VPC, EC2, IAM, CloudTrail, Athena, RDS, ECR, S3, CI/CD, DevOps, containerized workloads, K8s, Docker, Terraform, Azure DevOps Pipelines, Python, AI/ML workloads, Data Engineering, Networking</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>TOYOTA RACING</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tgr-europe.com.png</Employerlogo>
      <Employerdescription>TOYOTA RACING is a company within the fast-moving motorsport industry, employing over 500 people.</Employerdescription>
      <Employerwebsite>https://careers.tgr-europe.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tgr-europe.com/job/Cologne-AWS-Cloud-Engineer-%28mwd%29-for-Racing-Applications-NW-50858/1346005755/</Applyto>
      <Location>Cologne</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>377e69db-df1</externalid>
      <Title>Database Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a database engineer with deep experience building and scaling both structured and unstructured database platforms supporting distributed systems, data-intensive applications, and machine learning infrastructure.</p>
<p>As a member of the Platform team, you will build and mature database foundations for Scale, leveraging industry-standard platforms. You will collaborate with stakeholders across the organisation, including software developers, platform engineers, machine learning scientists, customer operations, etc.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and maintaining high-performance database systems</li>
<li>Collaborating with cross-functional teams to design and implement scalable database solutions</li>
<li>Developing and optimising database queries and indexing strategies</li>
<li>Ensuring data consistency and integrity across multiple systems</li>
<li>Mentoring junior engineers and contributing to the growth of the team</li>
<li>Improving engineering standards, tooling, and processes</li>
<li>Working directly with engineering and sales teams to create backend database solutions to meet their challenging data and security needs</li>
<li>Working with the Security Team on security compliance, pen tests, and mitigations that improve security across Scale</li>
<li>Building systems capable of handling millions of frames of data every day, making it available to both our workforce and our internal teams with high availability</li>
</ul>
<p>This role requires:</p>
<ul>
<li>5+ years of industry experience as a database engineer post-graduation</li>
<li>Engineering experience building real-time and distributed system architecture</li>
<li>Experience designing and self-hosting databases on industry-standard public cloud platforms</li>
<li>Deep familiarity with the design, architecture, optimisation, and tuning of multiple database platforms such as MongoDB, Postgres, MySQL, DynamoDB, Redis</li>
<li>Deep familiarity with SQL query optimisation, database indexing, scalability (partitioning/sharding), and replication</li>
<li>Experience developing and optimising backup and restore functionality to meet RTO goals</li>
<li>Intermediate experience in at least one coding language: TypeScript, Python, Go, Java, C++</li>
<li>Experience working with Docker, Kubernetes, and Infra-as-Code (e.g. Terraform); bonus points for experience supporting GPU/ML workloads</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Prior startup experience to help us grow responsibly</li>
<li>Experience with AWS, Datadog, ElasticSearch</li>
<li>Experience with cloud-based data warehouse solutions like Snowflake or Databricks</li>
<li>Experience with cost optimisation strategies and techniques for database platforms</li>
<li>Experience developing and designing intermediary data abstraction layers</li>
<li>Mentored and grown members of your team or been a tech lead on large projects</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$162,400-$203,000 USD</Salaryrange>
      <Skills>database engineering, distributed systems, data-intensive applications, machine learning infrastructure, SQL query optimisation, database indexing, scalability, partitioning, sharding, replication, backup and restore functionality, Docker, Kubernetes, Infra-as-Code, Terraform, GPU/ML workloads, prior startup experience, AWS, Datadog, ElasticSearch, cloud-based data warehouse solutions, cost optimisation strategies, intermediary data abstraction layers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4688489005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5fd90acd-a0f</externalid>
      <Title>Performance Modeling Engineer ~2</Title>
      <Description><![CDATA[<p>We are seeking a Performance Modeling Engineer to support the development and application of modeling tools used to evaluate AI system performance and inform architectural decisions.</p>
<p>In this role, you will partner closely with Senior Performance Modeling Engineers and the Performance Modeling Lead to analyze system behavior, run simulations and analytical models, and help evaluate tradeoffs across compute, memory, networking, and storage.</p>
<p>This role is ideal for early-career engineers with 1–2 years of experience in software engineering, systems analysis, or performance modeling who are excited to grow in large-scale infrastructure and hardware/software systems.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Support the development and maintenance of performance modeling tools and frameworks</li>
<li>Assist in building models to evaluate system behavior across compute, memory, networking, and interconnect subsystems</li>
<li>Help analyze distributed system scaling behavior and identify performance bottlenecks</li>
<li>Run simulations and analytical models to support architecture and infrastructure decisions</li>
<li>Partner with senior engineers to evaluate design tradeoffs across hardware and system components</li>
<li>Interpret modeling outputs and help translate findings into clear recommendations</li>
<li>Validate models using benchmarking data and real system performance measurements</li>
<li>Improve modeling workflows, documentation, and usability for broader team adoption</li>
<li>Collaborate cross-functionally with hardware, infrastructure, and architecture teams</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>1–2 years of experience in software engineering, systems modeling, performance analysis, or related technical work</li>
<li>Strong programming skills and experience building technical tools, scripts, or frameworks</li>
<li>Familiarity with system architecture fundamentals such as compute, memory, and networking</li>
<li>Ability to reason about system performance, bottlenecks, and scaling behavior</li>
<li>Strong analytical and problem-solving skills with comfort working in quantitative environments</li>
<li>Ability to learn quickly and work effectively across technical teams</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Exposure to AI/ML workloads, distributed systems, or large-scale infrastructure</li>
<li>Experience with simulation tools, benchmarking, profiling, or performance analysis</li>
<li>Familiarity with data center systems, server architecture, or hardware platforms</li>
<li>Interest in system architecture and hardware/software co-design</li>
<li>Internship or early professional experience in performance engineering, infrastructure, or systems design</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$266K – $445K</Salaryrange>
      <Skills>software engineering, systems modeling, performance analysis, technical tools, scripts, frameworks, system architecture, compute, memory, networking, AI/ML workloads, distributed systems, large-scale infrastructure, simulation tools, benchmarking, profiling, data center systems, server architecture, hardware platforms, hardware/software co-design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company that develops and deploys artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4f6be73e-9a1d-4ec6-8b0e-b2af0b4becfb</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d57ab2d-f3b</externalid>
      <Title>Cloud Solution Architect</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow&#39;s transportation.</p>
<p>If you&#39;re looking for the chance to leverage advanced technology to redefine the transportation landscape, enhance the customer experience, and improve people&#39;s lives: this is the opportunity for you. Join us and challenge your IT expertise and analytical skills to help create vehicles that are as smart as you are.</p>
<p>To meet the growing needs of the Customer analytics business, the team is looking for a self-motivated, technically proficient individual to craft and shepherd coherent solutions. This will require collaboration with a range of stakeholders to clarify requirements, establish pragmatic approaches, and support and articulate decisions over time. You will join a cloud architecture team that works closely with engineering teams and other architects across the organisation.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Technical Requirements</strong></p>
<ul>
<li>Extensive experience with Google Cloud Platform (GCP), specifically BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner and Apigee.</li>
<li>Security &amp; Networking: Strong understanding of cloud security protocols, IAM, encryption, and complex network topologies.</li>
<li>Data Management: Proficiency in Enterprise Data Platforms, Data mesh architecture and data-driven architectural patterns.</li>
<li>DevOps Tooling: Hands-on experience with GitHub, SonarQube, Checkmarx, and FOSSA.</li>
<li>Software Engineering: Strong background in building Web Services and maintaining Clean Code standards.</li>
</ul>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>System Design: Work with engineering teams to refine system designs, evangelising for horizontal scalability, resilience, and Clean Code compliance.</li>
<li>Product Collaboration: Partner with Product Managers to decompose complex business needs into incremental, production-ready user stories within an Agile/Sprint methodology.</li>
<li>Architectural Governance: Assess and document the rationale and tradeoffs for technical decisions; contribute to the broader Cloud Architecture team to improve global practices.</li>
<li>DevOps Excellence: Utilise and improve CI/CD pipelines using GitHub and automated testing/security tools to maximise deployment efficiency and minimise risk.</li>
</ul>
<p><strong>Cloud, Networking &amp; Security</strong></p>
<ul>
<li>Secure Infrastructure: Serve as the primary architect for cloud solutions, ensuring &#39;Secure-by-Design&#39; principles are applied across Google Cloud services (Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner).</li>
<li>Advanced Networking: Design and optimise cloud networking configurations, including VPCs, Service Controls, Load Balancing, and Private Service Connect, to ensure high availability and low latency.</li>
<li>Cyber Security Oversight: Integrate security scanning and compliance into the architecture (utilising Checkmarx, SonarQube, and FOSSA). Proactively address vulnerabilities in distributed systems and AI models (e.g., OWASP Top 10 for LLMs).</li>
<li>API &amp; Data Contracts: Bolster &#39;Data as a Product&#39; practices by enforcing strict API standards and data contracts to ensure seamless, secure interoperability between services.</li>
<li>FinOps &amp; Cost Optimisation: Drive fiscal responsibility by right-sizing GCP resources and optimising Generative AI architectures (token management/model selection) to maximise ROI.</li>
<li>SRE &amp; Performance Tuning: Apply Site Reliability Engineering principles to ensure high availability, minimise system latency, and lead root-cause analysis for complex, distributed system failures.</li>
<li>DevSecOps &amp; Problem Solving: Integrate security automation into CI/CD pipelines to ensure &#39;Secure-by-Design&#39; deployments while solving complex architectural trade-offs between speed, scale, and risk.</li>
<li>Continuous Learning: Stay at the forefront of AI research, specifically regarding autonomous agents, prompt engineering, etc.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>AI Development Tools: Experience with frameworks such as LangChain, LangGraph, or Agent Dev Kit to accelerate the delivery of intelligent applications.</li>
<li>Agentic &amp; GenAI Design: Lead the architectural design of Agentic AI systems (multi-agent orchestration) and Generative AI solutions, including Retrieval-Augmented Generation (RAG) patterns and LLM integration.</li>
<li>Kubernetes (GKE): Experience managing containerised workloads at scale.</li>
<li>Kafka/Event-Driven Design: Experience with high-throughput messaging and event-driven architectures.</li>
<li>MLOps: Familiarity with the end-to-end lifecycle of machine learning models in production.</li>
</ul>
<p><strong>Qualifications</strong></p>
<p><strong>You&#39;ll have...</strong></p>
<ul>
<li>A bachelor&#39;s or foreign equivalent degree in computer science, information technology or a technology-related field</li>
<li>5+ years of software engineering experience using Java or Python developing services (APIs, REST, etc.)</li>
<li>2+ years of experience with Google Cloud Platform or another cloud service provider (AWS, Azure, etc.) and associated cloud components</li>
<li>Experience designing/architecting and running distributed systems in a production environment</li>
<li>Strong communication skills and cognitive agility: the ability to engage in deep technical discussions with customers and peers, become a trusted technical advisor, and maintain good documentation</li>
</ul>
<p><strong>Even better, you may have...</strong></p>
<ul>
<li>Master&#39;s degree in computer science, electrical engineering or a closely related field of study</li>
<li>Familiarity with a breadth of programming languages, platforms, and systems</li>
<li>Experience with asynchronous messaging and eventually consistent system design</li>
<li>An agile, pragmatic, and empirical mindset</li>
<li>Critical thinking, decision-making and leadership aptitudes</li>
<li>Good organisational and problem-solving abilities</li>
<li>MDM, Entity Resolution, Customer Analytics and Marketing Analytics experience is a huge plus</li>
</ul>
<p>You may not check every box, or your experience may look a little different from what we&#39;ve outlined, but if you think you can bring value to Ford Motor Company, we encourage you to apply!</p>
<p><strong>As an established global company, we offer the benefit of choice. You can choose what your Ford future will look like: will your story span the globe, or keep you close to home? Will your career be a deep dive into what you love, or a series of new teams and new skills? Will you be a leader, a changemaker, a technical expert, a culture builder…or all of the above? No matter what you choose, we offer a work life that works for you, including:</strong></p>
<ul>
<li>Immediate medical, dental, and prescription drug coverage</li>
<li>Flexible family care, parental leave, new parent ramp-up programs, subsidised back-up child care and more</li>
<li>Vehicle discount programme for employees and family members, and management leases</li>
<li>Tuition assistance</li>
<li>Established and active employee resource groups</li>
<li>Paid time off for individual and team community service</li>
<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>
<li>Paid time off and the option to purchase additional vacation time</li>
</ul>
<p><strong>For a detailed look at our benefits, see:</strong> Benefit Summary</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$115,000-$192,900</Salaryrange>
      <Skills>Google Cloud Platform, BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner, Apigee, Security &amp; Networking, IAM, Encryption, Complex Network Topologies, Data Management, Enterprise Data Platforms, Data Mesh Architecture, Data-Driven Architectural Patterns, DevOps Tooling, GitHub, SonarQube, Checkmarx, FOSSA, Software Engineering, Web Services, Clean Code Standards, System Design, Horizontal Scalability, Resilience, Clean Code Compliance, Product Collaboration, Agile/Sprint Methodology, Architectural Governance, Cloud Architecture, DevOps Excellence, CI/CD Pipelines, Automated Testing/Security Tools, Secure Infrastructure, Secure-by-Design Principles, Cloud Services, Advanced Networking, VPCs, Service Controls, Load Balancing, Private Service Connect, Cyber Security Oversight, Security Scanning, Compliance, Distributed Systems, AI Models, API &amp; Data Contracts, Data as a Product, API Standards, Data Contracts, Seamless Interoperability, FinOps &amp; Cost Optimisation, Fiscal Responsibility, GCP Resources, Generative AI Architectures, Token Management, Model Selection, ROI Maximisation, SRE &amp; Performance Tuning, High Availability, System Latency, Root-Cause Analysis, DevSecOps &amp; Problem Solving, Security Automation, Continuous Learning, AI Research, Autonomous Agents, Prompt Engineering, Kubernetes, Containerised Workloads, Kafka/Event-Driven Design, High-Throughput Messaging, Event-Driven Architectures, MLOps, Machine Learning Models, End-to-End Lifecycle, AI Development Tools, Frameworks, LangChain, LangGraph, Agent Dev Kit, Agentic &amp; GenAI Design, Multi-Agent Orchestration, Generative AI Solutions, Retrieval-Augmented Generation, LLM Integration, Kubernetes (GKE)</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/corporate.ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://corporate.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62370</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9a0bf3cb-901</externalid>
      <Title>Performance Modeling Lead</Title>
      <Description><![CDATA[<p>We are seeking a Performance Modeling Lead to build and lead a small, high-impact team responsible for answering forward-looking architectural questions across AI infrastructure systems.</p>
<p>You will develop modeling frameworks and methodologies to evaluate system-level tradeoffs and guide key design decisions. Your work will directly influence reference architectures, vendor designs, and long-term infrastructure strategy.</p>
<p>This role sits at the intersection of AI workloads, system architecture, and quantitative modeling, and requires strong technical judgment, ownership, and the ability to translate complex analysis into clear, actionable guidance.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build and own a performance modeling framework/toolchain to evaluate AI systems across multiple levels of abstraction.</li>
<li>Analyze and quantify architectural tradeoffs across compute, memory, networking, storage, and system topology.</li>
<li>Develop performance models to guide decisions on:
<ul>
<li>scale-up vs. scale-out architectures</li>
<li>interconnect and network design</li>
<li>memory hierarchy and system balance</li>
</ul>
</li>
<li>Translate modeling outputs into clear recommendations for internal teams and external hardware vendors.</li>
<li>Influence reference designs and vendor roadmaps through data-driven insights.</li>
<li>Partner closely with machine learning, systems, and hardware teams to understand workload characteristics and requirements.</li>
<li>Lead and grow a small team (2–3 engineers), setting technical direction and maintaining high standards for modeling rigor.</li>
<li>Continuously improve modeling fidelity by validating against real system behavior and measurements.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Have experience owning or building performance modeling frameworks used to drive real system design decisions.</li>
<li>Have deep knowledge of AI/ML workloads, including training and/or inference at scale.</li>
<li>Understand system-level tradeoffs across compute, memory, and networking in large-scale distributed systems.</li>
<li>Are comfortable working across abstraction layers, from workload behavior to hardware implementation.</li>
<li>Have experience using modeling (analytical or simulation) to inform architectural decisions.</li>
<li>Can operate in ambiguous problem spaces and turn open-ended questions into structured analysis.</li>
<li>Communicate clearly and influence both internal teams and external partners.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience working with hardware vendors (ODM/JDM, silicon, networking).</li>
<li>Background in data center infrastructure or hyperscale systems.</li>
<li>Familiarity with accelerators (GPUs/ASICs) and interconnects (e.g., NVLink, InfiniBand, Ethernet).</li>
<li>Experience influencing hardware roadmaps or reference architectures.</li>
<li>Prior experience leading or mentoring engineers.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$342K – $555K</Salaryrange>
      <Skills>performance modeling, system architecture, quantitative modeling, AI workloads, machine learning, systems engineering, hardware engineering, hardware vendors, data center infrastructure, hyperscale systems, accelerators, interconnects, hardware roadmaps, reference architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f2293c9f-d036-4198-a268-3dad738c8d19</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>459e7356-55d</externalid>
      <Title>3P Architect</Title>
      <Description><![CDATA[<p>We are seeking a 3P Architect to define and drive rack- and cluster-level reference designs in collaboration with external partners. This role is responsible for translating workload requirements and system-level goals into concrete architectures, aligning partners on critical design attributes, and ensuring vendor roadmaps meet our infrastructure needs.</p>
<p>You will work closely with performance modeling and internal architecture teams to evaluate tradeoffs, while owning the end-to-end definition and execution of third-party system designs. This includes identifying gaps in current technologies, driving vendor development, and shaping future infrastructure capabilities.</p>
<p>This role requires strong system intuition, cross-functional leadership, and the ability to operate effectively across internal teams and external ecosystems.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Define rack- and cluster-level reference architectures for AI infrastructure deployments.</li>
<li>Translate workload requirements into clear system design specifications and partner deliverables.</li>
<li>Collaborate with performance modeling teams to evaluate architectural tradeoffs and system behaviors.</li>
<li>Align internal stakeholders and external partners on critical system attributes (performance, cost, power, reliability, scalability).</li>
<li>Identify gaps in current technology offerings and drive vendors (ODM/JDM, silicon, networking) to close those gaps.</li>
<li>Influence and shape vendor roadmaps to meet future infrastructure needs.</li>
<li>Track emerging technologies and evaluate their applicability to AI systems.</li>
<li>Define and lead proof-of-concept (PoC) efforts to validate new architectures and technologies.</li>
<li>Act as a key interface between OpenAI and external partners, ensuring execution against design intent.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Have strong experience in system architecture for large-scale infrastructure or data center environments.</li>
<li>Understand AI workload characteristics and how they map to system-level design decisions.</li>
<li>Are comfortable working with performance modeling outputs to inform architectural direction.</li>
<li>Have experience working with or managing hardware vendors (ODM/JDM, silicon, networking).</li>
<li>Can drive alignment across multiple stakeholders with competing constraints.</li>
<li>Have a track record of turning ambiguous requirements into clear, executable system designs.</li>
<li>Are proactive in identifying gaps and driving solutions across organizational boundaries.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience defining rack- or cluster-level systems for hyperscale or AI workloads.</li>
<li>Familiarity with accelerators (GPUs/ASICs), interconnects, and data center networking architectures.</li>
<li>Experience influencing vendor roadmaps and reference designs.</li>
<li>Background in infrastructure deployment, hardware engineering, or systems integration.</li>
<li>Experience leading PoCs or early-stage hardware validation efforts.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$342K – $555K</Salaryrange>
      <Skills>system architecture, large-scale infrastructure, data center environments, AI workload characteristics, performance modeling, hardware vendors, cross-functional leadership, hyperscale or AI workloads, accelerators, interconnects, data center networking architectures, vendor roadmaps, reference designs, infrastructure deployment, hardware engineering, systems integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e2afdede-a222-4825-b2fc-fec439a7c893</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>24d97e05-f68</externalid>
      <Title>Workload Porting &amp; Performance Engineer</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI&#39;s Infrastructure organization builds and evaluates the systems that power advanced AI workloads. We work closely with hardware, modeling, and architecture teams to ensure that new platforms deliver real-world performance aligned with workload needs.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a Workload Porting &amp; Performance Engineer to evaluate new hardware platforms by porting benchmarks and real-world workloads, analyzing performance, and identifying system bottlenecks.</p>
<p>In this role, you will bring up workloads on new systems, characterize performance behavior, and adapt workloads to better utilize hardware capabilities. You will play a critical role in validating new platforms and ensuring that performance aligns with expectations across compute, memory, and networking subsystems.</p>
<p>This role requires strong hands-on experience with performance analysis, workload optimization, and system-level debugging across hardware and software boundaries.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Port and enable benchmarks and real-world workloads on new hardware platforms.</li>
<li>Evaluate system performance across compute, memory, storage, and networking subsystems.</li>
<li>Identify and analyze performance bottlenecks and inefficiencies.</li>
<li>Adapt and optimize workloads to better utilize hardware capabilities.</li>
<li>Develop and run performance experiments and profiling workflows.</li>
<li>Compare expected vs. observed performance and provide feedback to:
<ul>
<li>hardware architecture teams</li>
<li>performance modeling teams</li>
<li>system and software engineers.</li>
</ul>
</li>
<li>Debug issues across the stack, including software, runtime, and hardware interactions.</li>
<li>Provide actionable insights to guide platform readiness and deployment decisions.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Experience with performance analysis, benchmarking, or workload optimization.</li>
<li>Strong understanding of system architecture, including CPU/GPU, memory, and I/O subsystems.</li>
<li>Experience porting or adapting workloads across different hardware platforms.</li>
<li>Familiarity with profiling tools and performance debugging techniques.</li>
<li>Ability to identify root causes of performance issues across hardware/software boundaries.</li>
<li>Experience working in large-scale or distributed system environments.</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Experience with AI/ML workloads, including training or inference systems.</li>
<li>Familiarity with GPU or accelerator-based systems.</li>
<li>Experience working with low-level performance tools (profilers, tracing, microbenchmarks).</li>
<li>Background in systems software, compilers, or runtime optimization.</li>
<li>Experience collaborating with hardware and architecture teams on performance validation.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$342K – $555K</Salaryrange>
      <Skills>performance analysis, benchmarking, workload optimization, system architecture, CPU/GPU, memory, I/O subsystems, profiling tools, performance debugging techniques, large-scale or distributed system environments, AI/ML workloads, GPU or accelerator-based systems, low-level performance tools, systems software, compilers, runtime optimization, hardware and architecture teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/ec0a4e03-bbcc-4c64-813f-b53dabb8f53a</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c7f352b8-62c</externalid>
      <Title>Performance Modeling Engineer</Title>
      <Description><![CDATA[<p>We are seeking Performance Modeling Engineers to develop and apply modeling tools that evaluate AI system performance and inform architectural decisions.</p>
<p>In this role, you will work closely with the Performance Modeling Lead and partner teams to analyze system behavior, run simulations or analytical models, and help quantify tradeoffs across compute, memory, networking, and storage. You will contribute to building modeling frameworks and applying them to real-world questions that impact system design and vendor decisions.</p>
<p>This role is well-suited for engineers with strong software or modeling backgrounds who are interested in developing deeper expertise in system architecture and AI infrastructure.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Develop and maintain performance modeling tools and frameworks.</li>
<li>Build models to evaluate system behavior across:
<ul>
<li>compute, memory, and interconnect subsystems</li>
<li>distributed system scaling and bottlenecks.</li>
</ul>
</li>
<li>Run simulations and analytical models to support architectural tradeoff analysis.</li>
<li>Collaborate with the performance modeling lead and system architects to answer forward-looking design questions.</li>
<li>Analyze and interpret modeling outputs, translating results into actionable insights.</li>
<li>Validate models against real system measurements and workload behavior.</li>
<li>Contribute to improving modeling fidelity, usability, and scalability.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Strong software engineering or modeling background (e.g., simulation, systems modeling, or performance analysis).</li>
<li>Familiarity with system architecture fundamentals (compute, memory, networking).</li>
<li>Experience with programming and building technical tools or frameworks.</li>
<li>Ability to reason about performance bottlenecks and scaling behavior.</li>
<li>Strong analytical skills and comfort working with quantitative models.</li>
<li>Ability to collaborate across teams and learn new system domains quickly.</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Exposure to AI/ML workloads or distributed systems.</li>
<li>Experience with simulation tools, performance modeling, or systems analysis.</li>
<li>Familiarity with data center infrastructure or large-scale systems.</li>
<li>Experience working with performance data, benchmarking, or profiling tools.</li>
<li>Interest in system architecture and hardware/software co-design.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$266K – $445K</Salaryrange>
      <Skills>performance modeling, system architecture, AI infrastructure, software engineering, modeling, simulation, systems modeling, performance analysis, programming, technical tools, frameworks, AI/ML workloads, distributed systems, simulation tools, systems analysis, data center infrastructure, large-scale systems, performance data, benchmarking, profiling tools, hardware/software co-design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/19fc3e36-3bf3-4a7c-b65f-498d89220436</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>74367c0d-65f</externalid>
      <Title>Silicon Implementation Engineer, Front End</Title>
      <Description><![CDATA[<p>We are seeking a highly capable Implementation Engineer &amp; Technologist to drive silicon construction and optimization for next-generation AI chips. This is a senior hands-on individual-contributor role for an engineer who combines strong technical breadth with the ability to go deep quickly, solve hard problems, and land results in collaboration with cross-functional teams.</p>
<p>You will operate across architecture, circuits, memory, RTL, physical implementation, and integration technologies to turn ambitious product goals into manufacturable silicon. This role is not limited to analysis or pathfinding; you will be expected to develop solutions, prototype ideas, drive execution, and close critical gaps.</p>
<p>The ideal candidate is a hands-on generalist with strong engineering judgment, deep circuit intuition, broad semiconductor knowledge, and a habit of using AI tools to move faster and make better decisions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partner with architecture and system teams to translate product goals into executable silicon construction strategies.</li>
<li>Drive hands-on optimization of power, performance, area, cost, and reliability across the silicon stack.</li>
<li>Develop and implement solutions spanning circuits, memory, RTL, physical design, and integration.</li>
<li>Use and build AI-driven tools, flows, and methodologies to accelerate silicon implementation.</li>
<li>Evaluate new technologies and convert them into reliable product constructions optimized for performance, performance/TCO, and performance/W.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>BS with 12+ years, MS with 10+ years, or PhD with 6+ years of relevant industry experience in chip design or implementation.</li>
<li>Strong hands-on expertise in circuits and implementation-driven PPA optimization.</li>
<li>Deep knowledge of semiconductor technologies including memory, advanced nodes, packaging, and 3D integration.</li>
<li>Hands-on experience with RTL design and physical implementation through tapeout.</li>
<li>Proven ability to work across disciplines and solve complex technical problems end-to-end.</li>
<li>Strong use of AI tools for engineering productivity, analysis, coding, or design optimization.</li>
<li>Excellent technical communication and collaboration skills.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Strong first-principles understanding of AI chip architectures and training/inference workloads.</li>
<li>Experience improving silicon products through innovations in performance, power, cost, yield, or reliability.</li>
<li>Experience with HBM, SRAM, memory hierarchy design, or memory-centric optimization.</li>
<li>Experience building internal tools, models, or automation used by engineering teams.</li>
<li>Research lab experience and/or PhD in Electrical Engineering, Computer Engineering, Computer Science, or related field.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$266K – $445K</Salaryrange>
      <Skills>semiconductor technologies, memory, advanced nodes, packaging, 3D integration, RTL design, physical implementation, AI tools, engineering productivity, analysis, coding, design optimization, AI chip architectures, training/inference workloads, HBM, SRAM, memory hierarchy design, memory-centric optimization, internal tools, models, automation, research lab experience, PhD in Electrical Engineering, Computer Engineering, Computer Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company developing AI systems. It has a significant presence in the tech industry.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/497daf98-1fa6-45aa-ba67-bc79207cd75f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0f00522c-1ea</externalid>
      <Title>Inference Technical Lead, On-Device Transformers</Title>
      <Description><![CDATA[<p>Job Title: Inference Technical Lead, On-Device Transformers</p>
<p>Location: San Francisco</p>
<p>Department: Consumer Products</p>
<p>Job Type: Full time</p>
<p>Workplace Type: Hybrid</p>
<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Future of Computing Research team is an applied research team in the Consumer Devices group focused on developing new methods and models that support our vision as we advance our mission of building AGI that benefits all of humanity.</p>
<p><strong>About the Role</strong></p>
<p>As a Technical Lead on the Future of Computing Research team, you will work together with both the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.</p>
<p><strong>This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Evaluate and select silicon platforms (GPUs, NPUs, and specialized accelerators) for on-device and edge deployment of OpenAI models.</li>
<li>Work closely with research teams to co-design model architectures that meet real-world deployment constraints such as latency, memory, power, and bandwidth.</li>
<li>Analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities.</li>
<li>Partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads.</li>
<li>Build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems.</li>
<li>Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators.</li>
<li>Understand the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements.</li>
<li>Have designed or optimized high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines.</li>
<li>Have experience building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes.</li>
<li>Have already spent time in the weeds teaching models to speak and perceive.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Salary</strong></p>
<p>Compensation Range: $445K</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$445K</Salaryrange>
      <Skills>Experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators, Understanding the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements, Designing or optimizing high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines, Building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes, Teaching models to speak and perceive</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company. It pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a653b035-a866-4a5c-9c2a-fda3c2950eee</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>64780097-d2c</externalid>
      <Title>Software Engineer, Backend</Title>
      <Description><![CDATA[<p>You&#39;ll build and scale the backend systems that power millions of users creating content every day on Gamma. This role is about solving real distributed systems challenges at scale while maintaining the performance and reliability users expect from a modern AI-powered product. You&#39;ll work across the full stack, shipping features that directly impact how people create and share their ideas.</p>
<p>While this role is backend focused, you&#39;ll work across the entire product with our frontend, product, and design teams. Our full TypeScript stack is built on modern technologies including React, Node.js, PostgreSQL, Redis, and cutting-edge AI models.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Scale backend systems to hundreds of millions of users while maintaining high performance and availability</li>
<li>Build and optimize APIs that power real-time collaborative editing and AI content generation</li>
<li>Design and implement distributed systems that handle massive scale with reliability</li>
<li>Ship features across the full stack, working closely with frontend engineers to deliver polished experiences</li>
<li>Architect solutions for complex technical challenges in areas like data consistency, caching, and query optimization</li>
<li>Collaborate with product and design to turn ideas into production-ready features</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>3+ years building production backend systems with strong fundamentals in distributed systems, databases, and API design</li>
<li>Deep proficiency in TypeScript/Node.js or similar backend languages, with eagerness to work in our TypeScript stack</li>
<li>Experience scaling systems to handle millions of users and high throughput workloads</li>
<li>Strong understanding of PostgreSQL, Redis, or similar database technologies</li>
<li>Passion for building APIs, scaling complex systems, and creating excellent web applications</li>
<li>Curiosity and an attitude that match your technical knowledge</li>
<li>Prior experience working with websockets, streaming, or scaling inference workloads (Nice to have)</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $180K - $275K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $275K</Salaryrange>
      <Skills>TypeScript, Node.js, PostgreSQL, Redis, API design, Distributed systems, Database design, Websockets, Streaming, Inference workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a modern AI-powered product with millions of users creating content every day.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/fb12356a-e868-4a4a-801c-882a6b0ac83f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>002c5e0f-f56</externalid>
      <Title>Member of Technical Staff, Software Co-Design AI HPC Systems</Title>
      <Description><![CDATA[<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost.</p>
<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>
<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>
<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>
<p>About the Team</p>
<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>
<p>Microsoft Superintelligence Team</p>
<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.</p>
<p>We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you’re a brilliant, highly ambitious, and low-ego individual, you’ll fit right in. Come join us as we work on our next generation of models!</p>
<p>Responsibilities</p>
<p>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</p>
<p>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</p>
<p>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</p>
<p>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</p>
<p>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</p>
<p>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</p>
<p>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams.</p>
<p>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>
<p>Qualifications</p>
<p>Minimum Qualifications</p>
<p>Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, or equivalent practical experience.</p>
<p>10+ years of experience (or equivalent depth) working across systems software, hardware architecture, or AI infrastructure, with demonstrated impact at scale.</p>
<p>Strong background in one or more of the following areas:</p>
<ul>
<li>AI accelerator or GPU architectures</li>
<li>Distributed systems and large-scale AI training/inference</li>
<li>High-performance computing (HPC) and collective communications</li>
<li>ML systems, runtimes, or compilers</li>
<li>Performance modeling, benchmarking, and systems analysis</li>
<li>Hardware–software co-design for AI workloads</li>
</ul>
<p>Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development.</p>
<p>Proven ability to work across organizational boundaries and influence technical decisions involving multiple stakeholders.</p>
<p>Preferred Qualifications</p>
<p>Experience designing or operating large-scale AI clusters for training or inference.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-5/</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>54d50a6e-0f0</externalid>
      <Title>Member of Technical Staff, AI Systems Engineer</Title>
      <Description><![CDATA[<p>We are building next-generation customized AI silicon designed to accelerate AI workloads with unprecedented efficiency. We are looking for an exceptional Systems Engineer to bridge the gap between our custom hardware and modern AI inference frameworks.</p>
<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>
<p>The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control.</p>
<p>As a Senior AI Systems Engineer, you will own the software integration layer between our custom AI chip&#39;s proprietary SDK and SGLang, a state-of-the-art serving framework for Large Language Models (LLMs) and Vision-Language Models. You will be responsible for ensuring that our silicon can seamlessly run SGLang inference workloads at peak performance, bypassing the traditional CUDA ecosystem entirely.</p>
<p>Responsibilities:</p>
<ul>
<li>Framework Integration: Architect and develop the backend integration to make our custom AI chip a first-class citizen in SGLang.</li>
<li>Custom Operator Development: Write custom C++ / PyTorch extensions that map SGLang&#39;s primitive operations (e.g., RadixAttention, FlashAttention, matrix multiplications) to our custom chip&#39;s proprietary software layer.</li>
<li>Performance Optimization: Profile and optimize end-to-end LLM inference latency, throughput, and memory utilization (Paged Attention) on our hardware.</li>
<li>Cross-Functional Collaboration: Work closely with our hardware architecture and compiler teams to provide feedback on our custom software stack and silicon design based on framework-level bottlenecks.</li>
<li>Testing &amp; Deployment: Build robust testing pipelines to validate model accuracy and performance parity against standard GPU baselines.</li>
</ul>
<p>Qualifications:</p>
<p>Must-Have Qualifications:</p>
<ul>
<li>BS, MS, or PhD in Computer Science, Computer Engineering, or a related field.</li>
<li>Software engineering experience focusing on systems programming, ML infrastructure, or AI compilers.</li>
<li>Expertise in Python, with a deep understanding of memory management and concurrent programming.</li>
<li>Experience with LLM inference engines: hands-on experience modifying or extending frameworks like SGLang, vLLM, DeepSpeed-FastGen, or TensorRT-LLM.</li>
<li>PyTorch internals: strong experience writing PyTorch C++ extensions and custom operators.</li>
<li>Hardware interfacing: a proven track record of integrating machine learning workloads with hardware accelerators (GPUs, TPUs, NPUs) using custom SDKs, APIs, or low-level drivers.</li>
</ul>
<p>Nice-to-Have Qualifications:</p>
<ul>
<li>Prior experience working on non-CUDA software ecosystems (e.g., AMD ROCm, AWS Neuron, Google XLA).</li>
<li>Familiarity with AI compilers and intermediate representations (MLIR, Apache TVM, OpenAI Triton).</li>
<li>Strong understanding of underlying LLM architectures (Transformers, MoE) and state-of-the-art attention algorithms (FlashAttention v2/v3).</li>
<li>Previous experience at an AI silicon startup or working on custom accelerators (e.g., Google TPU, AWS Trainium).</li>
</ul>
<p>This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, C++, LLM Inference Engines, SGLang, vLLM, DeepSpeed-FastGen, TensorRT-LLM, Hardware Interfacing, Machine Learning Workloads, Custom SDKs, APIs, Low-Level Drivers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft Superintelligence</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft Superintelligence is a team within Microsoft AI focused on developing next-generation customized AI silicon designed to accelerate AI workloads with unprecedented efficiency.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-ai-systems-engineer-microsoft-superintelligence-3/</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>792cef6b-cf8</externalid>
      <Title>Transaction Principal</Title>
      <Description><![CDATA[<p>As a Transaction Principal for Australia at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our Australian data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems in the region. You&#39;ll bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams, and partner closely with our Compute Markets team, who own the Australia market strategy and government relationships. This is not an established leasing org; you&#39;ll be building process alongside execution.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the RFP and commercial sourcing process for Australian data center deals, managing developer outreach, proposal evaluation, and competitive selection</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, the network organization, platform engineers, and finance to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing status</li>
<li>Develop and maintain transaction timelines, tracking critical-path items and proactively identifying risks that could impact deal closure</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
<li>Manage complex digital infrastructure development activities to a construction-ready state, through a developer or directly</li>
<li>Marry the right projects, capital stacks, and developers at the right stages</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint in region</li>
<li>Partner with the Compute Markets Manager to prioritize sites and counterparties, and feed deal learnings back into Australia market strategy</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Have experience working in or with Australian markets, with knowledge of the local real estate and development landscape</li>
<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Demonstrate exceptional communication skills and can coordinate effectively across time zones with HQ-based teams and external partners</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>
<li>Come from the development side of the industry rather than traditional brokerage/leasing: you understand how DC development works and how value is created (yield-on-cost, cap rates, development fees)</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>
<li>Understand utility coordination, power procurement, or energy considerations in data center transactions, particularly in the Australian context (NEM, grid connection)</li>
<li>Have relationships within the Australian data center developer and broker ecosystem</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>transaction management, commercial real estate, data center leasing, infrastructure procurement, negotiation, project management, RFP processes, competitive sourcing, Australian markets, local real estate and development landscape, communication skills, data center or hyperscale infrastructure transactions, DC development, yield-on-cost, cap rates, development fees, technical requirements for AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy considerations, Australian data center developer and broker ecosystem, corporate development, strategic partnerships, infrastructure investment, high-growth technology companies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5154345008</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f05d190-fce</externalid>
      <Title>Sr. Manager, Field Engineering - Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>A key responsibility of this role is to hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers.</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proofs of concept, and establish themselves as trusted advisors throughout the customer lifecycle.</li>
<li>Travel to customer sites for executive sessions, technical workshops, and relationship building.</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of customers and the team.</li>
<li>Stay current with emerging data and AI trends in the digital native tech sector.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience in the data space with a technical product (e.g., data warehousing, big data, cloud infrastructure, or machine learning).</li>
<li>5+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment.</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets.</li>
<li>The ability to inspire a team vision around the unique nature of the digital natives business.</li>
<li>A history of execution, managing workloads and consumption with sales, product, and engineering counterparts.</li>
<li>Experience owning executive alignment in accounts to guide strategic decisions.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range: $192,100 - $264,175 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, technical product, digital native customers, data, analytical, and AI workloads, Solutions Architects, customer-facing teams, hiring, onboarding, and supporting team members, high-growth environment, executive alignment, accounts that guide strategic decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the data lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8496009002</Applyto>
      <Location>Colorado; Remote - California; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9af8d812-df8</externalid>
      <Title>AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI products.</p>
<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>
<ul>
<li>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</li>
<li>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</li>
<li>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</li>
<li>Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.</li>
<li>Play an active role in hiring, mentoring, and developing other engineers on the team.</li>
<li>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</li>
</ul>
<p>We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:</p>
<ul>
<li>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</li>
<li>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</li>
<li>You have hands-on experience with one or more of the following:
<ul>
<li>Model training (especially transformers and LLMs).</li>
<li>Model inference at scale (again, especially transformers and LLMs).</li>
<li>Low-level GPU work, such as writing CUDA or Triton kernels.</li>
</ul>
</li>
<li>You are comfortable working in production environments at meaningful scale (traffic, data, or organizational).</li>
<li>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</li>
<li>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</li>
<li>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, or Go). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary, annual bonus, and equity</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated</li>
<li>Generous paid time off above the statutory minimum</li>
<li>Hybrid working</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed</li>
<li>Fun events for employees, friends, and family!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI company that builds customer service solutions. It was founded in 2011 and serves nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7824142</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>588dfb0e-611</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your responsibilities will include serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>
<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>
<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>
<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>
<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>
<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>
<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
<p>To be successful in this role, you will need to have a B.S. in Computer Science or a related technical discipline, or equivalent experience.</p>
<p>You will also need 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions.</p>
<p>You will need to be fluent in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>
<p>You will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>
<p>You will need to be familiar with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as Infiniband and NVIDIA Collective Communications Library (NCCL).</p>
<p>You will need to have experience with running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes.</p>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4557835006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce19e8c0-163</externalid>
      <Title>Transaction Manager</Title>
      <Description><![CDATA[<p>As a Transaction Manager at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems, requiring you to bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Help identify data center capacity opportunities and options by managing a network of relationships across data center developer, broker, and power contacts.</li>
<li>Lead the RFP and commercial sourcing process for specific data center deals, managing developer outreach, proposal evaluation, and competitive selection processes</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance organization to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact (SPOC) for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing process status</li>
<li>Develop and maintain transaction timelines, tracking critical path items and proactively identifying risks that could impact deal closure</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Demonstrate exceptional communication skills, able to serve as an effective liaison between internal stakeholders and external partners</li>
<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Have a collaborative mindset and can build trust with diverse stakeholder groups across the organization</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>
<li>Possess familiarity with data center developer ecosystems and market dynamics</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
<li>Understand utility coordination, power procurement, or energy considerations in data center transactions</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
</ul>
<p>The annual compensation range for this role is $365,000-$435,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$435,000 USD</Salaryrange>
      <Skills>transaction management, commercial real estate, data center leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, term sheets, LOIs, commercial agreements, data center or hyperscale infrastructure transactions, AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy considerations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5099080008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>24176cb8-311</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>
<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>
<p>Responsibilities include:</p>
<ul>
<li>Building and managing massive-scale clusters</li>
<li>Designing, developing, and extending an in-house container orchestration platform</li>
<li>Collaborating with research teams to architect and optimize compute clusters</li>
<li>Profiling, debugging, and resolving complex system-level performance bottlenecks</li>
<li>Owning end-to-end infrastructure initiatives</li>
</ul>
<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and a proven track record of profiling, debugging, and optimizing complex system-level performance issues.</p>
<p>Preferred qualifications include experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads; experience operating or designing large-scale AI training/inference clusters; and familiarity with performance tools, tracing, and debugging in production distributed environments.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e68e5c3b-1e2</externalid>
      <Title>Lakebase Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>
<p>You will lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>
<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>
<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>
<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>
<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>
<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>
<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>
<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<ul>
<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>
<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>
</ul>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>
<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>
<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>
<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>
<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>
<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>
<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>
<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>
<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>
<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>
<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>
<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>
<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>
<p>Bachelor’s degree or equivalent practical experience.</p>
<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>
<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>
<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>
<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>
<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>
<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8449848002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>96d05ee1-799</externalid>
      <Title>Staff Software Engineer, Cluster Orchestration</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence.</p>
<p>Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.</p>
<p>Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.</p>
<p><strong>About the Role</strong></p>
<p>As part of the Cluster Orchestration team, you will play a key role in advancing CoreWeave&#39;s orchestration platform, including SUNK (Slurm on Kubernetes), our Kubernetes-native foundation that powers AI training and inference at scale, and beyond.</p>
<p>This is an opportunity to help shape one of the most critical layers of the AI cloud: ensuring workloads run seamlessly, reliably, and efficiently across massive GPU clusters.</p>
<p>By building the systems that eliminate infrastructure bottlenecks and create new orchestration capabilities, you will directly empower customers to innovate faster and push the boundaries of what&#39;s possible with AI.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>As a Staff Engineer, you will be a technical leader shaping the long-term strategy for CoreWeave&#39;s orchestration platform.</p>
<p>You&#39;ll define architectural direction, own critical parts of the orchestration platform and other managed services, and drive cross-org initiatives in scheduling, quota enforcement, and scaling at hyperscale.</p>
<p>You&#39;ll mentor senior engineers, establish org-wide best practices in reliability and observability, and ensure CoreWeave&#39;s orchestration layer evolves to meet the demands of next-generation AI workloads.</p>
<p><strong>Who You Are</strong></p>
<ul>
<li>8+ years of software engineering experience.</li>
<li>Proven track record of designing and operating large-scale distributed systems in production.</li>
<li>Deep expertise in Slurm/Kubernetes internals and cloud-native development.</li>
<li>Advanced proficiency in Go and distributed systems design.</li>
<li>Experience setting technical direction and influencing cross-team architecture.</li>
<li>Bachelor&#39;s or Master&#39;s degree in CS, EE, or related field.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Familiarity with orchestration and workflow technologies such as Ray, Kubeflow, Kueue, Istio, Knative, or Argo Workflows.</li>
<li>Experience with distributed workloads, GPU-based applications, or ML pipelines.</li>
<li>Knowledge of scheduling concepts like quota enforcement, pre-emption, and scaling strategies.</li>
<li>Exposure to reliability practices including SLOs, alarms, and post-incident reviews.</li>
<li>Experience with AI infrastructure and workloads (ML training, inference, or HPC).</li>
<li>Ability to mentor senior engineers and elevate organizational standards.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>
<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p><strong>Salary and Benefits</strong></p>
<p>The base salary range for this role is $185,000 to $275,000.</p>
<p>The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p>We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>What We Offer</strong></p>
<p>The range we&#39;ve posted represents the typical compensation range for this role.</p>
<p>To determine actual compensation, we review the market rate for each candidate which can include a variety of factors.</p>
<p>These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000 to $275,000</Salaryrange>
      <Skills>software engineering, distributed systems, Slurm, Kubernetes, cloud-native development, Go, scheduling, quota enforcement, scaling strategies, reliability practices, SLOs, alarms, post-incident reviews, AI infrastructure, workloads, ML training, inference, HPC, orchestration and workflow technologies, Ray, Kubeflow, Kueue, Istio, Knative, Argo Workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658801006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6421dea-6e3</externalid>
      <Title>Strategic Hunter Account Executive - Lakebase</Title>
      <Description><![CDATA[<p>We are seeking a Strategic Hunter Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>This high-impact role sits within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>
<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>
<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>
<p><strong>The impact you will have</strong></p>
<ul>
<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</li>
</ul>
<ul>
<li>Lead with outcomes for key Lakebase personas , including platform teams and developers, data teams, and central IT , articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>
</ul>
<ul>
<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>
</ul>
<ul>
<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>
</ul>
<ul>
<li>Orchestrate proof-of-value engagements and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>
</ul>
<ul>
<li>Compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>
</ul>
<ul>
<li>Align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>
</ul>
<ul>
<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>
</ul>
<ul>
<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>
</ul>
<p><strong>What success looks like in this role</strong></p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<ul>
<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>
</ul>
<ul>
<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>
</ul>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<ul>
<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>
</ul>
<ul>
<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>
</ul>
<ul>
<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</li>
</ul>
<ul>
<li>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</li>
</ul>
<p><strong>What we look for</strong></p>
<ul>
<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>
</ul>
<ul>
<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>
</ul>
<ul>
<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>
</ul>
<ul>
<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</li>
</ul>
<ul>
<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>
</ul>
<ul>
<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>
</ul>
<ul>
<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>
</ul>
<ul>
<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>
</ul>
<ul>
<li>Bachelor’s degree or equivalent practical experience.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</li>
</ul>
<ul>
<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>
</ul>
<ul>
<li>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>
</ul>
<ul>
<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>
</ul>
<ul>
<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>
</ul>
<ul>
<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p><strong>Our Commitment to Diversity and Inclusion</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise SaaS sales, value selling, negotiation, storytelling, data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems (AWS, Azure, GCP), microservices, event-driven systems, reverse ETL, real-time decisioning, operational analytics, AI-native and agent-driven applications, partner and ISV co-selling</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477547002</Applyto>
      <Location>Bengaluru, India; Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a7d182d-c49</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your primary responsibility will be to serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>
<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>
<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>
<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>
<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>
<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>
<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
<p>To be successful in this role, you will need a proven track record as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focused on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions.</p>
<p>You will also need to have fluency in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>
<p>In addition, you will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>SGD 165,000 to SGD 225,000</Salaryrange>
      <Skills>Cloud computing concepts, Kubernetes solutions, High-performance compute (HPC) environments, Distributed systems, Cloud infrastructure, Code contributions to open-source inference frameworks, Scripting and automation related to Kubernetes clusters and workloads, Building solutions across multi-cloud environments, Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649036006</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2ab9c635-07a</externalid>
      <Title>Operations Engineer, Fleet Reliability</Title>
      <Description><![CDATA[<p>The Fleet Reliability Operations team is responsible for the day-to-day provisioning, management, and uptime of CoreWeave&#39;s ever-expanding fleet of server nodes. This team plays a central role in CoreWeave&#39;s growth strategy, configuring, updating, and remotely troubleshooting our highest-tier supercomputing clusters and their networking, delivery platforms, and tooling dependencies.</p>
<p>We are seeking curious, creative, and persistent problem solvers to join our Fleet Reliability Operations team to help drive batches of server nodes through our provisioning and validation processes while efficiently and effectively troubleshooting node or cluster problems as they arise.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Configuring and maintaining large-scale high-performance supercomputing clusters running state-of-the-art GPUs</li>
<li>Troubleshooting hardware and software issues; escalating and coordinating as needed with data center, network, hardware, and platform teams to drive resolution</li>
<li>Monitoring and analyzing system performance and taking appropriate remediation actions for cloud health</li>
<li>Approaching work with flexibility and optimism, anticipating shifting business and technical priorities</li>
<li>Creating and maintaining documentation of team processes, knowledge, and best practices for system management</li>
<li>Thinking critically about day-to-day work and working collaboratively to improve team processes and efficiency</li>
</ul>
<p>As a member of our team, you will be part of a dynamic and fast-paced environment where you will have the opportunity to grow and develop your skills. We offer a competitive salary range of $83,000 to $110,000, as well as a comprehensive benefits package, including medical, dental, and vision insurance, company-paid life insurance, and flexible PTO.</p>
<p>If you are a motivated and detail-oriented individual who is passionate about working with cutting-edge technology, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$83,000 to $110,000</Salaryrange>
      <Skills>Linux system administration, Troubleshooting hardware and software issues, System maintenance tasks, Scripting languages (Bash, Python, PowerShell, etc.), Grafana, Prometheus, PromQL queries or similar observability platforms, Kubernetes administration, HPC - administering GPU-related workloads, Data center environments including server racks, HVAC systems, fiber trays</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4617382006</Applyto>
      <Location>New York, NY / Plano, TX / Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid life insurance and voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account and Health Savings Account</li>
<li>Tuition reimbursement</li>
<li>Ability to participate in the Employee Stock Purchase Program (ESPP)</li>
<li>Mental wellness benefits through Spring Health</li>
<li>Family-forming support provided by Carrot</li>
<li>Paid parental leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9166d234-4c5</externalid>
      <Title>Solutions Architect - HPC/AI/ML</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers establish their Kubernetes environment, develop proofs of concept, onboard, and optimise workloads. You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on AI/ML workloads within high-performance compute (HPC) environments.</p>
<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimisation and suggesting suitable solutions.</p>
<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>SGD 165,000 to SGD 225,000</Salaryrange>
      <Skills>cloud computing concepts, architecture, technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes, code contributions to open-source inference frameworks, scripting and automation related to AI/ML workloads, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimisation, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider specialising in artificial intelligence and machine learning workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649044006</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09c520cf-f62</externalid>
      <Title>Systems Engineer, Kernel</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>
<p>As a kernel generalist, you will be responsible for debugging kernel-level issues; analysing and fixing crashes, panics, and dumps; and upstreaming fixes and features that improve the performance and reliability of our stack.</p>
<p>This position is ideal for someone who thrives in low-level systems engineering, understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>
<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>
<p>Our Team&#39;s Stack:</p>
<ul>
<li>Python, Go, bash/sh, C</li>
</ul>
<ul>
<li>Prometheus, Victoria Metrics, Grafana</li>
</ul>
<ul>
<li>Linux Kernel (custom build), Ubuntu</li>
</ul>
<ul>
<li>Intel/AMD/ARM CPUs, Nvidia GPUs, DPUs, Infiniband and Ethernet NICs</li>
</ul>
<ul>
<li>Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>
</ul>
<p>Focus Areas:</p>
<ul>
<li>Kernel Debugging – Analyse kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>
</ul>
<ul>
<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>
</ul>
<ul>
<li>Stack-Wide Support – Ensure kernel support and stability across:
<ul>
<li>Virtualization (KubeVirt, QEMU, VFIO)</li>
<li>Container runtimes (containerd, nydus, kubelet)</li>
<li>HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand)</li>
</ul>
</li>
</ul>
<ul>
<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>
</ul>
<ul>
<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>
</ul>
<p>About the role:</p>
<ul>
<li>Triage and fix kernel crashes and performance regressions.</li>
</ul>
<ul>
<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>
</ul>
<ul>
<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>
</ul>
<ul>
<li>Implement diagnostics and tooling for kernel-level observability.</li>
</ul>
<ul>
<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>
</ul>
<ul>
<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>
</ul>
<ul>
<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>
</ul>
<ul>
<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, kdump.</li>
</ul>
<ul>
<li>Strong C programming skills with the ability to write maintainable and upstream-quality code.</li>
</ul>
<ul>
<li>Experience working with kernel modules, drivers, and subsystems.</li>
</ul>
<ul>
<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Contributions to the Linux kernel or related open-source projects.</li>
</ul>
<ul>
<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>
</ul>
<ul>
<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>
</ul>
<ul>
<li>GPU/DPU bring-up and driver experience.</li>
</ul>
<ul>
<li>Experience in HPC or large-scale distributed systems.</li>
</ul>
<ul>
<li>Familiarity with QA/QE best practices</li>
</ul>
<ul>
<li>Experience working in Cloud environments</li>
</ul>
<ul>
<li>Experience as a software engineer writing large-scale applications</li>
</ul>
<ul>
<li>Experience with machine learning is a huge bonus</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors such as qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
</ul>
<ul>
<li>Company-paid Life Insurance</li>
</ul>
<ul>
<li>Voluntary supplemental life insurance</li>
</ul>
<ul>
<li>Short and long-term disability insurance</li>
</ul>
<ul>
<li>Flexible Spending Account</li>
</ul>
<ul>
<li>Health Savings Account</li>
</ul>
<ul>
<li>Tuition Reimbursement</li>
</ul>
<ul>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
</ul>
<ul>
<li>Mental Wellness Benefits through Spring Health</li>
</ul>
<ul>
<li>Family-Forming support provided by Carrot</li>
</ul>
<ul>
<li>Paid Parental Leave</li>
</ul>
<ul>
<li>Flexible, full-service childcare support with Kinside</li>
</ul>
<ul>
<li>401(k) with a generous employer match</li>
</ul>
<ul>
<li>Flexible PTO</li>
</ul>
<ul>
<li>Catered lunch each day in our office and data center locations</li>
</ul>
<ul>
<li>A casual work environment</li>
</ul>
<ul>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance &amp; stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4599319006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>372999e8-579</externalid>
      <Title>Senior Software Engineer II, AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer II on the AI Workload Orchestration team, you will help build and operate CoreWeave&#39;s Kubernetes-native platform for admitting, scheduling, and operating AI workloads at scale.</p>
<p>This platform integrates multiple orchestration and scheduling frameworks such as Kueue, Volcano, and Ray to support modern AI training and inference workflows. It complements SUNK (Slurm on Kubernetes) by providing a Kubernetes-first, cloud-native orchestration layer with deep platform integration.</p>
<p>You will own meaningful components of the platform, drive reliability and performance improvements, and help scale the system as customer demand and workload complexity continue to grow.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and operate Kubernetes-native services for AI workload orchestration and scheduling</li>
<li>Own one or more platform components end-to-end, including design, implementation, testing, and on-call support</li>
<li>Improve scheduling latency, cluster utilization, and workload reliability through metrics-driven engineering</li>
<li>Contribute to architectural discussions across services and influence design decisions within the platform</li>
<li>Work closely with adjacent teams (CKS, infrastructure, managed inference) to ensure clean interfaces and integrations</li>
<li>Mentor junior engineers and raise the quality bar for code, design, and operations</li>
</ul>
<p>About the role:</p>
<ul>
<li>5–8 years of professional software engineering experience in distributed systems, cloud infrastructure, or platform engineering</li>
<li>Strong experience building production systems in Go (Python or C++ a plus)</li>
<li>Solid understanding of Kubernetes fundamentals, APIs, controllers, and operating services in production</li>
<li>Experience working with scheduling, resource management, or quota-based systems</li>
<li>Proven ability to improve system reliability and performance using data and operational metrics</li>
<li>Comfortable owning services in production and participating in on-call rotations</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience with Kubernetes-native orchestration frameworks such as Kueue, Volcano, Ray, Kubeflow, or Argo Workflows</li>
<li>Familiarity with GPU-based workloads, ML training, or inference pipelines</li>
<li>Knowledge of scheduling concepts such as quota enforcement, pre-emption, and backfilling</li>
<li>Experience with reliability practices including SLOs, alerting, and incident response</li>
<li>Exposure to AI infrastructure, HPC, or large-scale distributed compute environments</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Go, Distributed systems, Cloud infrastructure, Platform engineering, Scheduling, Resource management, Quota-based systems, Kueue, Volcano, Ray, Kubeflow, Argo Workflows, GPU-based workloads, ML training, Inference pipelines, SLOs, Alerting, Incident response, AI infrastructure, HPC, Large-scale distributed compute environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647595006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3c6419c4-a9b</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable, without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help build the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>
<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP</li>
<li>Have experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning</li>
<li>Possess a deep curiosity for how things work under the hood and a proven ability to work independently to solve opaque performance issues</li>
<li>Have experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments</li>
<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity</li>
<li>Have excellent communication and collaboration skills; you will work closely with internal and external stakeholders to build consensus and drive projects forward</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
<li>The ability to quickly understand systems-design tradeoffs and keep track of rapidly evolving software systems</li>
<li>Published work in performance optimization and scaling distributed systems</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA / New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95061695-858</externalid>
      <Title>Director of Engineering, Media &amp; Entertainment (M&amp;E)</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Director of Engineering, Media &amp; Entertainment (M&amp;E) to lead the development of next-generation cloud platforms and tools that power modern content creation workflows. This role will drive the engineering strategy and execution for solutions that support visual effects (VFX), animation, rendering, and post-production pipelines used by studios, artists, and creative teams worldwide.</p>
<p>As a senior engineering leader, you will build and lead high-performing engineering teams responsible for designing scalable infrastructure, developer tools, and user-facing systems that enable creative professionals to run complex production workloads in the cloud. You will collaborate closely with product, design, infrastructure, and customer teams to translate real-world production workflows into reliable, high-performance software platforms.</p>
<p>This role combines deep engineering leadership with domain expertise in M&amp;E workflows, ensuring that the platform delivers exceptional performance, reliability, and usability for demanding creative workloads.</p>
<p><strong>Leadership &amp; Strategy</strong></p>
<ul>
<li>Build and scale high-performing engineering teams focused on cloud platforms for media production workloads including rendering, simulation, and content processing</li>
<li>Recruit, mentor, and develop engineering managers and senior engineers while fostering a culture of innovation, accountability, and collaboration</li>
<li>Define and execute the long-term engineering strategy for Media &amp; Entertainment products and services</li>
<li>Partner with Product and Design leaders to translate industry workflows and customer needs into scalable platform capabilities</li>
<li>Establish engineering best practices for reliability, security, observability, and operational excellence</li>
<li>Drive roadmap alignment between engineering initiatives and strategic business objectives</li>
</ul>
<p><strong>Technical Leadership</strong></p>
<ul>
<li>Lead the design and development of scalable backend services, APIs, and developer interfaces that power M&amp;E cloud workflows</li>
<li>Build platforms that support demanding workloads such as rendering, asset processing, and distributed compute pipelines</li>
<li>Drive architecture decisions for cloud-native systems leveraging technologies such as Kubernetes, distributed services, and infrastructure-as-code</li>
<li>Ensure the platform enables self-service provisioning, automation, and repeatable workflows for production pipelines</li>
<li>Establish engineering standards around performance, scalability, and security for enterprise-grade SaaS/PaaS systems</li>
<li>Oversee system reliability and operational readiness through clear SLOs, monitoring, and runbook-driven on-call practices</li>
</ul>
<p><strong>Product &amp; Workflow Collaboration</strong></p>
<ul>
<li>Work closely with product leadership to define technical requirements aligned with real customer workflows in animation, VFX, and media production</li>
<li>Engage directly with studios, artists, and technical directors to understand pipeline challenges and incorporate feedback into product development</li>
<li>Translate industry needs into clear engineering priorities and technical roadmaps</li>
<li>Guide development teams through product milestones including specification, development, testing, and release</li>
<li>Ensure engineering efforts balance customer requirements, technical feasibility, and business goals</li>
</ul>
<p>Customer and industry collaboration is critical in identifying workflow needs and transforming them into actionable development plans for engineering teams.</p>
<p><strong>Operational Excellence</strong></p>
<ul>
<li>Implement engineering processes that support scalable development, including CI/CD pipelines, testing strategies, and code review standards</li>
<li>Manage development timelines and resource allocation across multiple engineering teams</li>
<li>Track key operational and customer metrics including performance, reliability, and cost efficiency</li>
<li>Drive continuous improvement in engineering productivity and system performance</li>
<li>Partner with QA, support, and customer success teams to ensure high-quality releases and strong user satisfaction</li>
</ul>
<p><strong>Who You Are:</strong></p>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>10+ years of software engineering experience, including leadership of engineering teams and managers</li>
<li>Proven experience building and scaling cloud-based platforms or distributed systems</li>
<li>Strong understanding of cloud infrastructure, microservices architecture, and automation technologies</li>
<li>Experience delivering enterprise SaaS or PaaS products used by external customers</li>
<li>Excellent leadership, communication, and cross-functional collaboration skills</li>
<li>Ability to operate strategically while remaining deeply technical and hands-on with architecture decisions</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience building platforms or tools for Media &amp; Entertainment workflows such as VFX, animation, rendering, or post-production pipelines</li>
<li>Familiarity with industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan</li>
<li>Experience designing APIs, developer platforms, or automation frameworks used by technical users</li>
<li>Knowledge of GPU-accelerated compute workloads and distributed rendering systems</li>
<li>Experience working with Kubernetes, infrastructure-as-code, and large-scale cloud environments</li>
</ul>
<p><strong>What Success Looks Like</strong></p>
<ul>
<li>Engineering teams delivering reliable, scalable platforms used by media studios and creative teams globally</li>
<li>Clear alignment between product vision, customer workflows, and engineering execution</li>
<li>Platforms capable of supporting large-scale production workloads with high performance and reliability</li>
<li>Strong engineering culture focused on innovation, collaboration, and operational excellence</li>
</ul>
<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match.</p>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$206,000 - $303,000</Salaryrange>
      <Skills>Cloud infrastructure, Microservices architecture, Automation technologies, Enterprise SaaS or PaaS products, Leadership, Communication, Cross-functional collaboration, Strategic decision-making, Media &amp; Entertainment workflows, VFX, animation, rendering, or post-production pipelines, Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan, APIs, developer platforms, or automation frameworks, GPU-accelerated compute workloads and distributed rendering systems, Kubernetes, infrastructure-as-code, and large-scale cloud environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) and machine learning (ML) workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4666156006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b71a8e89-5f0</externalid>
      <Title>Multinational Digital Infrastructure - Senior Cloud Engineer</Title>
      <Description><![CDATA[<p>Anduril Industries is seeking a Senior Cloud Engineer to join its Multinational Digital Infrastructure team. As a Senior Cloud Engineer, you will design and implement cloud environments that enable Anduril to effectively operate sovereign programmes in the U.K. and Australia, and to expand into other nations as Anduril&#39;s global presence grows.</p>
<p>You will work across engineering, security, and product teams to ensure our digital infrastructure is secure, scalable, and ready to support emerging mission demands.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, deploy, and maintain enterprise cloud landing zones, security and infrastructure tooling.</li>
<li>Collaborate with teams across the U.S. and Australia to enable secure connectivity between sovereign cloud environments.</li>
<li>Partner with government customers, authorizing officials (AOs), cybersecurity teams, and policy shops to accelerate accreditation, break through legacy barriers, and unlock access for cross-nation engineering teams.</li>
<li>Implement infrastructure automation (IaC), observability tooling, and secure configuration baselines to support scalable, repeatable environment builds.</li>
<li>Work closely with product, autonomy, Lattice, and Maritime engineering teams to integrate infrastructure capabilities with platform development, testing, and deployment workflows.</li>
<li>Act as a technical leader during environment standup, troubleshooting, and validation events; ensure classified systems perform reliably in support of mission-critical needs.</li>
<li>Support development of next-generation secure architectures for multinational development, data sharing, and mission system integration across Maritime platforms.</li>
<li>Serve as a technical representative during customer events, exercises, and operational demonstrations to ensure infrastructure readiness and mission success.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Ability to obtain and maintain a UK security clearance to SC level.</li>
<li>Bachelor&#39;s degree in a STEM field or equivalent engineering experience.</li>
<li>Technical depth in one or more areas, including cloud infrastructure, secure networking, systems engineering, DevSecOps, platform architecture, cybersecurity, identity &amp; access management.</li>
<li>Specific technologies include: cloud - AWS, Azure; infrastructure as code - Terraform, CloudFormation; SCM - GitHub Enterprise; CI/CD - CircleCI, GitLab; IDAM + SSO - Okta, AWS Identity Center.</li>
<li>8+ years of relevant engineering, infrastructure, or technical program execution experience.</li>
<li>Willingness to travel domestically and internationally as required.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with secure systems engineering, ideally within UK Government or Defence.</li>
<li>Experience provisioning large enterprise cloud platforms for hundreds or thousands of users.</li>
<li>Experience designing or maintaining distributed systems, secure networks, or infrastructure supporting autonomy, AI/ML, or big data workloads.</li>
<li>Demonstrated ability to work across technical disciplines, influence without authority, and operate in ambiguous and fast-paced environments.</li>
<li>Experience working with international partners or navigating multi-nation technical or policy workflows.</li>
</ul>
<p>The salary for this role is competitive, and equity grants are included as part of Anduril&#39;s total compensation package.</p>
<p>Additional benefits include:</p>
<ul>
<li>Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>Generous time off, including a holiday hiatus in December.</li>
<li>Family planning &amp; parenting support, including coverage for fertility treatments and adoption.</li>
<li>Mental health resources, including access to free therapy and life coaching.</li>
<li>Professional development opportunities, including annual reimbursement for professional development.</li>
<li>Commuter benefits and relocation assistance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure, Secure networking, Systems engineering, DevSecOps, Platform architecture, Cybersecurity, Identity &amp; access management, AWS, Azure, Terraform, CloudFormation, GitHub Enterprise, CircleCI, Gitlab, Okta, AWS Identity Center, Secure systems engineering, Provisioning large enterprise cloud platforms, Designing or maintaining distributed systems, Infrastructure supporting autonomy, AI/ML, or big data workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5039728007</Applyto>
      <Location>London, England, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f249232-d14</externalid>
      <Title>Principal Engineer, Cluster Orchestration</Title>
      <Description><![CDATA[<p>As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of CoreWeave&#39;s cluster orchestration systems. This includes Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale.</p>
<p>You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining the long-term architecture for CoreWeave&#39;s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.</li>
<li>Acting as a technical authority on scheduling, quota enforcement, fairness, pre-emption, and multi-tenant GPU isolation.</li>
<li>Making design decisions that balance performance, reliability, cost, and operational complexity.</li>
</ul>
<p>In addition to these responsibilities, you will also lead the evolution of Kubernetes-native control planes, including SUNK and custom operators, and design systems that support workload admission, validation, and rollout, including model onboarding flows.</p>
<p>You will work closely with cross-functional teams to ensure that the systems you design and implement meet the needs of our customers and are scalable, reliable, and efficient.</p>
<p>If you have a passion for building large-scale distributed systems and are looking for a challenging and rewarding role, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$206,000 to $303,000</Salaryrange>
      <Skills>Kubernetes, Slurm, SUNK, Go, Cloud-native systems development, GPU-heavy platforms for AI training, inference, or HPC workloads, Kueue, Kubeflow, Argo Workflows, Ray, Istio, Knative</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) and machine learning (ML) workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658799006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>40d32156-365</externalid>
      <Title>Reliability Lead, Common Services</Title>
      <Description><![CDATA[<p>As Reliability Lead, Common Services, you will establish and lead the Reliability Engineering and production operations practice for the Common Services organization. You&#39;ll partner closely with engineering leaders and teams across Common Services to define how we build, release, monitor, and operate critical services, raising the bar on reliability, availability, and operational excellence across the board.</p>
<p>In this role, you will:</p>
<ul>
<li>Establish and lead the SRE / production engineering practice for the Common Services organization, including standards for reliability, incident management, and on-call, in partnership with the central Product Engineering organization.</li>
<li>Develop an Operational Excellence strategy that focuses not only on improving system performance but also on monitoring and reducing operational toil.</li>
<li>Partner with engineering and product teams to define SLOs, SLIs, and error budgets for critical Common Services, and ensure these become part of how teams plan and make tradeoffs.</li>
<li>Own and improve the incident management lifecycle for Common Services, including on-call rotations, escalation paths, incident tooling, post-incident reviews, and follow-through on corrective actions.</li>
<li>Drive the observability strategy (metrics, logs, traces, dashboards, alerts) for Common Services, ensuring we have actionable visibility into the health, performance, and capacity of key systems.</li>
<li>Collaborate with engineering leads to design and review architectures for reliability, scalability, resilience, and operability, including failure modes, redundancy, and graceful degradation.</li>
<li>Lead efforts to automate and harden operational workflows, including deployments, rollbacks, configuration management, change management, and routine maintenance tasks.</li>
<li>Build strong, trust-based relationships with partner teams and stakeholders, becoming a go-to leader for production readiness and operational risk within Common Services.</li>
<li>Hire, mentor, and develop SRE and production engineering talent, fostering a culture of continuous improvement, learning from incidents, and humane on-call.</li>
<li>Partner with other SRE and production engineering leaders across CoreWeave to align on global practices, tools, and reliability goals, representing the needs and constraints of Common Services.</li>
</ul>
<p>You will be responsible for defining the reliability strategy, processes, and standards for the Common Services portfolio and driving consistent, high-quality operational practices across multiple teams.</p>
<p>The base salary range for this role is $206,000 to $303,000.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$206,000 to $303,000</Salaryrange>
      <Skills>Site Reliability Engineering, Production Engineering, Linux-based production environments, Containers, Orchestration technologies, Observability stacks, Alerting systems, SLIs/SLOs, Error budgets, Incident management, On-call rotations, Escalation paths, Post-incident reviews, Corrective actions, Automation tooling, Infrastructure-as-code, CI/CD pipelines, GPU workloads, High-performance computing, Latency/throughput-sensitive systems, Multi-tenant environments, Multi-region environments, Regulated environments, Service ownership models, Mentoring, Managing senior engineers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for AI development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650165006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>87c43ead-4a1</externalid>
      <Title>Staff Site Reliability Engineer, Security- GCP</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p>Okta&#39;s Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Staff Site Reliability Engineer to join a team focused on designing and developing Security solutions to harden our cloud infrastructure.</p>
<p>We encourage you to prescribe defence-in-depth measures, industry security standards and enforce the principle of least privilege to help take our Security posture to the next level.</p>
<p>Our Infrastructure Security team has a niche skill set that balances Security domain expertise with the ability to design, implement, and roll out infrastructure across multiple cloud environments without adding friction to product functionality or performance.</p>
<p>We are responsible for meeting the ever-growing need to improve customer safety and privacy by providing security services that are coupled with the core Okta product.</p>
<p>This is a high-impact role in a security-centric, fast-paced organisation that is poised for massive growth and success.</p>
<p>You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap.</p>
<p>You will focus on engineering security aspects of the systems used across our services.</p>
<p>Join us and be part of a company that is about to change the cloud computing landscape forever.</p>
<p>As a Staff Engineer, you should be able to identify gaps, propose innovative solutions, and contribute to roadmaps while driving alignment across multiple teams within the organisation.</p>
<p>Additionally, you should serve as a role model, providing technical mentorship to junior team members and fostering a culture of learning and growth.</p>
<p><strong>What are we looking for?</strong></p>
<p>We are looking for a security-first SRE engineer who doesn&#39;t just &#39;flag&#39; issues but builds the automation to solve them.</p>
<p>You should have a deep-seated intuition for cloud-native security and a proven track record of hardening large-scale GCP and AWS environments.</p>
<p>As a Technical SME, you will design and build production infrastructure with a &#39;security-at-scale&#39; mindset.</p>
<p><strong>What You Will Work On</strong></p>
<p>Security Evangelism: Lead initiatives to strengthen our security posture for critical infrastructure and promote best practices across the engineering organisation.</p>
<p>Incident Response &amp; Reliability: Respond to production security incidents, perform root cause analysis, and build automated preventions to ensure high performance and reliability.</p>
<p>Automated Hardening: Identify manual security processes and automate them using custom tooling and CI/CD integrations.</p>
<p>Architecture &amp; Documentation: Develop technical documentation, runbooks, and procedures for a 24x7 online environment.</p>
<p>Platform Evolution: Continuously evolve our monitoring platforms, moving from simple auditing to active, automated prevention.</p>
<p><strong>Minimum Required Knowledge, Skills, &amp; Abilities:</strong></p>
<p>Experience: 8+ years of experience architecting and running complex cloud networking and infrastructure, with at least 7 years specialised in DevSecOps or Cloud Security.</p>
<p>GCP Expertise: At least 3 years of deep, hands-on experience securing GCP (GKE, GCE, Shared VPC, etc.).</p>
<p>Infrastructure as Code (IaC): 10+ years of experience using Terraform and Chef to manage complex cloud resources and OS hardening.</p>
<p>Automation Mastery: Expert-level proficiency in Go, Python, or Ruby for building custom security tooling and automated remediation.</p>
<p>Hardened Containers: Proven track record of securing containerised workloads, including image scanning, K8s RBAC, and runtime security tools (e.g., CrowdStrike Falcon, Falco, or gVisor).</p>
<p>Unflappable Troubleshooting: A &#39;see a problem, fix the problem&#39; mindset with the ability to debug complex networking, IAM, or performance issues under pressure.</p>
<p>Security Foundations: Strong grasp of Linux internals, OS hardening (CIS benchmarks), and IP protocols (TLS/SSL, DNSSEC, BGP).</p>
<p>Education: BS in Computer Science or equivalent professional experience.</p>
<p><strong>Key Responsibilities:</strong></p>
<p>IAM &amp; Secrets Management: Design and maintain large-scale production IAM policies and secrets management workflows.</p>
<p>Infrastructure Hardening: Implement and maintain Public Key Infrastructure (PKI) and ensure all GCE/GKE environments meet strict compliance standards.</p>
<p>Operational Excellence: Utilise industry-standard tools like OSQuery, Splunk, Chronicle, Nessus, or Qualys/CrowdStrike to monitor system health and security telemetry.</p>
<p>Strategic Rollouts: Lead the phased transition of security policies from Audit/Detection mode to Blocking/Prevention mode, ensuring zero impact on production uptime.</p>
<p><strong>Bonus Points For:</strong></p>
<p>Multi-Cloud IAM Governance: Experience designing a unified IAM framework across AWS and GCP, utilising federated identities such as Workload and Workforce Identity Federation, with an understanding of SAML &amp; OIDC auth mechanisms and automated &#39;Least Privilege&#39; enforcement.</p>
<p>Cloud-Native Reliability Engineering: Deep understanding of multi-cloud reliability patterns, maintaining high availability (HA) during security patching or infrastructure-wide hardening.</p>
<p>Hardened Kubernetes Orchestration: Advanced experience securing GKE, EKS, and kOps, specifically implementing Pod Security Standards, Network Policies, and Admission Controllers for a &#39;Zero-Trust&#39; posture.</p>
<p>Threat Modeling: Security Reviews &amp; Threat Modeling at both Design &amp; Implementation scope.</p>
<p><strong>The Okta Experience:</strong> Supporting Your Well-Being, Driving Social Impact, Developing Talent, and Fostering Connection + Community</p>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>
<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>
<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>
<p>Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud-native security, GCP, AWS, DevSecOps, Cloud Security, Terraform, Chef, Go, Python, Ruby, containerised workloads, image scanning, K8s RBAC, runtime security tools, Linux internals, OS hardening, IP protocols, TLS/SSL, DNSSEC, BGP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides cloud-based identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6671260</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>99450ad6-e3b</externalid>
      <Title>Network Engineer - AI/HPC</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are seeking a skilled Network Engineer to join our team at xAI. As a Network Engineer, you will play a critical role in designing and operating large-scale networks for our AI and HPC systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and operate large-scale networks with a deep understanding of congestion control on Ethernet and InfiniBand</li>
<li>Develop and optimize network configurations to ensure high performance and availability</li>
<li>Collaborate with the team to design the next iteration of our backend and front-end networks</li>
<li>Travel to Memphis to build capacity and participate in a team on-call rotation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Minimum of 10 years designing and operating large-scale networks, with 5 years in the Ethernet AI/HPC space</li>
<li>Deep understanding of congestion control on Ethernet, with InfiniBand knowledge an added bonus</li>
<li>Expertise in creating a portfolio of metrics for performance and operations to optimize the fleet for training and inference traffic</li>
<li>Experience with Python for automating repetitive tasks and for working with and analyzing large data sets</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Opportunity to work with a highly motivated team focused on engineering excellence</li>
<li>Collaborative and dynamic work environment</li>
<li>Professional development opportunities</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work on cutting-edge AI and HPC projects</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and experienced Network Engineer looking for a new challenge, please submit your application, including your resume and cover letter, to [insert contact information].</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>RoCEv2, NCCL, Python, Ethernet, Infiniband, AI training and inference workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The company has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4946691007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a45e2e8c-400</externalid>
      <Title>Staff Software Engineer, Foundational Model Serving</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Foundation Model Serving is the API Product for hosting and serving frontier AI model inference for open source models like Llama, Qwen, and GPT OSS as well as proprietary models like Claude and OpenAI GPT.</p>
<p>We&#39;re looking for engineers who have owned high-scale, operationally sensitive systems such as customer-facing APIs, edge gateways, ML inference, or similar services, and who are interested in going deep on building LLM APIs and runtimes at scale. As a Staff Engineer, you&#39;ll play a critical role in shaping both the product experience and core infrastructure.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and implement core systems and APIs that power Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.</li>
<li>Partner with product and engineering leadership to define the technical roadmap and long-term architecture for serving workloads.</li>
<li>Drive architectural decisions and trade-offs to optimize performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.</li>
<li>Contribute directly to key components across the serving infrastructure, from working in systems like vLLM and SGLang to creating token-based rate limiters and optimizers, ensuring smooth and efficient operations at scale.</li>
<li>Collaborate cross-functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.</li>
<li>Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.</li>
<li>Represent the team in cross-organizational technical discussions and influence Databricks’ broader AI platform strategy.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience building and operating large-scale distributed systems.</li>
<li>Experience leading high-scale operationally sensitive backend systems.</li>
<li>A track record of up-leveling teams&#39; engineering excellence.</li>
<li>Strong foundation in algorithms, data structures, and system design as applied to large-scale, low-latency serving systems.</li>
<li>Proven ability to deliver technically complex, high-impact initiatives that create measurable customer or business value.</li>
<li>Strong communication skills and ability to collaborate across teams in fast-moving environments.</li>
<li>Strategic and product-oriented mindset with the ability to align technical execution with long-term vision.</li>
<li>Passion for mentoring, growing engineers, and fostering technical excellence.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>large-scale distributed systems, high-scale operationally sensitive backend systems, algorithms, data structures, system design, low-latency serving systems, GPU serving workloads, vLLM, SGLang, token based rate limiters, optimizers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8224683002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac0b2f4-6c9</externalid>
      <Title>Member of Technical Staff - Imagine Product</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>
<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low-latency at global scale.</li>
<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>
<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>
<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>
<li>Own full-cycle development of solutions: from system design and prototyping to deployment, monitoring, observability, and iterative refinement.</li>
<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>
<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and handling high-throughput workloads.</li>
<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>
<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>
<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>
<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>
<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>
<li>Background in AI-driven consumer products or media generation technologies.</li>
<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://xAI.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052027007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>18ae1499-b22</externalid>
      <Title>Research Engineer, Discovery</Title>
      <Description><![CDATA[<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference and eagerness to quickly dive and get up to speed in areas they are not yet an expert on.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have a proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>History working with large scale reinforcement learning</li>
<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f43bb14-3c4</externalid>
      <Title>Senior Cloud Engineer</Title>
      <Description><![CDATA[<p>Shield AI is seeking a Senior Cloud Engineer to support its leadership in applied artificial intelligence development. In this role, you will be responsible for engineering, deploying, provisioning, and managing critical cloud systems that drive innovation across Shield AI&#39;s public and private cloud environments, both domestically and internationally.</p>
<p>As part of the Cloud and Infrastructure team within Enterprise Operations, you will play a key role in ensuring the performance, scalability, and reliability of these systems to support various business units. This position may involve occasional travel to Shield AI locations.</p>
<p><strong>Responsibilities:</strong></p>
<p><strong>Engineering:</strong></p>
<ul>
<li>Manage and optimize multi-cloud infrastructure (Azure, AWS) for performance, reliability, and scalability.</li>
<li>Support and optimize cloud and virtual machine environments, assisting with capacity planning, performance monitoring, security compliance, and vulnerability remediation.</li>
<li>Assist in implementing and maintaining infrastructure systems, including servers, storage, backup solutions, and disaster recovery processes, for both public and private clouds.</li>
<li>Continuously learn and adapt to emerging technologies and platforms, leveraging automation wherever possible.</li>
<li>Author the necessary documentation for engineered and maintained systems, along with the associated processes, for supporting teams to leverage.</li>
<li>Assist in researching, recommending, and developing innovative solutions for complex requirements and issue resolution.</li>
<li>Collaborate cross-functionally with AI, DevOps, and Security teams to ensure compliance, observability, and resilience in mission-critical environments.</li>
<li>Follow Agile methodologies and apply sound engineering principles.</li>
</ul>
<p><strong>Operations and Support:</strong></p>
<ul>
<li>Perform daily system monitoring, verifying the integrity and availability of all server resources, systems, and key processes, and reviewing system and application logs.</li>
<li>Support system maintenance and upgrades, including OS patching, software configuration, hardware updates, and performance tuning to ensure optimal cloud infrastructure performance.</li>
<li>Provide escalated support for operational issues possibly during and after normal business hours for systems, workloads, and Kubernetes AI infrastructure.</li>
<li>Analyze, troubleshoot and resolve system infrastructure and software issues.</li>
<li>Ability to participate in on-call, emergency, or maintenance roles.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or related field, or equivalent experience (4+ years) plus an engineer level certification, Azure/AWS Associate, or another similar level certification.</li>
<li>4 years&#39; experience supporting applications and systems in production, preferably in high-availability, mission-critical, or defense-grade environments.</li>
<li>Comfortable with operational efficiencies utilizing Infrastructure as Code (IaC) solutions (e.g., Terraform, Ansible).</li>
<li>Strong understanding of networking concepts (VPCs, VPNs, subnets, routing, firewalls).</li>
<li>Experience in automating repetitive tasks using scripting languages such as PowerShell, Python, or Bash.</li>
<li>Experience with deployment and systems administration of at least one Linux distribution (e.g., RHEL, Ubuntu).</li>
<li>Experience with Microsoft Windows Server administration, Azure, and Active Directory environments.</li>
<li>Strong organizational skills, a process-oriented mindset, attention to detail, and effective verbal and written communication abilities.</li>
<li>Ability to work independently to accomplish assigned tasks.</li>
<li>Solution-oriented, constructive approach to problem-solving.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience deploying and maintaining workloads in Azure public cloud environments.</li>
<li>Hands-on experience with containerization and Kubernetes-based workloads.</li>
<li>Strong understanding of virtualization and private cloud platforms (e.g., VMware, Hyper-V, KVM).</li>
<li>Background in DevOps, Site Reliability Engineering (SRE), or cloud infrastructure roles.</li>
<li>Proficiency with configuration management and automation tools (e.g., Ansible, Chef, Puppet, Terraform).</li>
<li>Experience building and optimizing CI/CD pipelines.</li>
</ul>
<p><strong>Salary and Benefits:</strong></p>
<ul>
<li>$110,000 - $170,000 a year</li>
<li>Full-time regular employee offer package: Pay within range listed + Bonus + Benefits + Equity</li>
<li>Temporary employee offer package: Pay within range listed above + temporary benefits package (applicable after 60 days of employment)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000 - $170,000 a year</Salaryrange>
      <Skills>Cloud Engineering, Multi-cloud infrastructure, Azure, AWS, Networking concepts, Infrastructure as Code, Scripting languages, Linux distribution, Microsoft Windows Server administration, Active Directory environments, Containerization, Kubernetes-based workloads, Virtualization, Private cloud platforms, DevOps, Site Reliability Engineering, Configuration management, Automation tools, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/702e2609-db48-49ab-8bec-d405c956a6ce</Applyto>
      <Location>San Diego, California / Dallas, Texas / San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>93a43345-780</externalid>
      <Title>FinOps Program Manager</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life.</p>
<p>The FinOps function is responsible for financial accountability, visibility, and optimization across all engineering-related spend at Plaid. This includes cloud infrastructure, AI/ML and data workloads, third-party SaaS tools, and other technical investments that support Plaid&#39;s products and internal platforms.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Monitor and analyze engineering spend across cloud, AI/ML, data platforms, and SaaS, identifying trends, anomalies, and optimization opportunities.</li>
<li>Build and maintain forecasts for engineering spend, partnering with Finance and engineering leaders to understand drivers, assumptions, and risks.</li>
<li>Partner with engineering, product, and TPMs to incorporate cost considerations into roadmaps, architectural decisions, and execution plans.</li>
<li>Lead cost optimization initiatives, such as rightsizing, commitment strategies, and workload efficiency improvements, in collaboration with engineering owners.</li>
<li>Create and maintain dashboards and reporting that make spend understandable and actionable for both engineers and executives.</li>
<li>Implement FinOps practices and processes, including showback/chargeback models, unit economics, and cost ownership frameworks.</li>
<li>Partner on tooling and automation, working with data and engineering teams to improve cost visibility, forecasting accuracy, and operational efficiency.</li>
<li>Drive alignment and behavior change, helping teams balance cost, performance, reliability, and velocity through data-driven decision making.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>6–10+ years of relevant experience working at the intersection of engineering, infrastructure, data, or finance in a cloud-native or SaaS environment.</li>
<li>Proven experience partnering closely with engineering teams to influence decisions involving cloud infrastructure, data platforms, AI/ML workloads, or SaaS spend.</li>
<li>Working understanding of modern cloud-native architectures, including core components such as compute, storage, networking, data pipelines, and managed services, enough to engage credibly with engineers on design, tradeoffs, and cost drivers.</li>
<li>Strong foundation in cost analysis, forecasting, budgeting, and variance management, with the ability to translate data into clear, actionable insights.</li>
<li>Comfort working directly with data, including writing SQL (or effectively using AI-assisted tools to do so) to explore datasets, validate assumptions, and answer ad hoc questions.</li>
<li>Experience building clear, high-quality dashboards and BI artifacts that are not only accurate, but intuitive and delightful for engineers and leaders to use.</li>
<li>Demonstrated success driving adoption and behavior change, embedding cost awareness into day-to-day engineering workflows, not just producing reports.</li>
<li>Experience owning and delivering cross-functional programs end-to-end, often without direct authority or a dedicated team.</li>
<li>Familiarity with FinOps principles and practices (e.g., shared ownership, showback/chargeback, unit economics, optimization strategies).</li>
<li>Strong communication skills, with the ability to tailor complex technical and financial concepts for engineering, finance, and executive audiences.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Hands-on familiarity with cloud cost management tools (e.g., AWS Cost Explorer, GCP Billing, Azure Cost Management, CloudHealth, Cloudability, or similar).</li>
<li>Experience working with or supporting data platforms and AI/ML workloads, including understanding cost drivers for batch processing, streaming, storage, and model training/inference.</li>
<li>Exposure to showback/chargeback models, cost allocation strategies, or product-level unit economics.</li>
<li>Experience improving data models or pipelines that support analytics, reporting, or financial attribution.</li>
<li>Familiarity with BI tools such as Mode, Tableau, Looker, or similar, and a strong eye for dashboard usability and design.</li>
<li>Background in a technical role (e.g., engineering, TPM, infra, data, or engineering operations) before moving into a more cross-functional or business-oriented position.</li>
<li>Experience operating in a high-growth or rapidly scaling environment, where cost structures and investment priorities are evolving quickly.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$172,800-$259,200 per year</Salaryrange>
      <Skills>cloud infrastructure, AI/ML, data platforms, SaaS, cost analysis, forecasting, budgeting, variance management, SQL, data visualization, dashboard creation, cross-functional program management, FinOps principles, showback/chargeback models, unit economics, optimization strategies, cloud cost management tools, data platforms and AI/ML workloads, cost allocation strategies, product-level unit economics, BI tools, technical role background, high-growth environment experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides a platform for developers to connect their financial accounts to various applications and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/acb399b1-e0f8-45f3-bffa-c89c9c573a12</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ffccb977-f95</externalid>
      <Title>Senior Site Reliability Engineer</Title>
<Description><![CDATA[<p>Are you excited by the idea of building fast, reliable, and intelligent infrastructure for a product used by engineering teams around the world? We&#39;re looking for a Senior Site Reliability Engineer to join the Backstage team at Spotify. We&#39;re building the next generation of our developer platform: one that doesn&#39;t just manage software, but actively helps create and maintain it through AI-native workflows.</p>
<p>In 2026, SRE isn&#39;t just about uptime; it&#39;s about symbiosis. As part of our growing engineering team, you&#39;ll design, build, and operate the cloud infrastructure behind our external developer portal product and our internal fleet of background coding agents. You&#39;ll collaborate closely with experienced engineers (both human and AI-assisted) while operating at real-world scale, with deep observability, strong safety boundaries, and the unique reliability challenges of agentic production systems.</p>
<p>Backstage is more than just a platform; it&#39;s a foundational force in the developer community. Born out of Spotify&#39;s quest for better developer tooling, Backstage now powers developer portals across the globe. But we didn&#39;t stop at catalogs and templates. Today, Backstage is becoming the command center for AI-native engineering. From enterprises orchestrating large-scale migrations to fast-moving teams using AI to improve velocity and quality, our solutions are redefining what great developer experience looks like.</p>
<p>As part of the Backstage team, you&#39;ll shape developer experience for companies large and small, for our thriving open-source community, and for Spotify itself. You&#39;ll help define how reliable, secure infrastructure enables the next wave of agentic developer tooling.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own fleet reliability. Lead the reliability, security, and scalability strategy for Portal&#39;s SaaS infrastructure, including the runtime environments that power our platform and LLM-driven agent workflows. Define SLOs, drive capacity planning, and ensure our systems meet the demands of a rapidly growing product.</li>
<li>Architect for the agentic era. Design and evolve infrastructure on GCP and AWS using Terraform and infrastructure-from-code patterns. Shape how we structure environments for non-deterministic AI workloads, including sandboxing, resource isolation, cost governance, and security boundaries.</li>
<li>Drive operational excellence. Evolve our incident management, on-call, and postmortem practices. Leverage AI assistants to accelerate root cause analysis and build increasingly self-healing capabilities into our production systems.</li>
<li>Lead fullstack reliability. Operate across a modern web stack (TypeScript, React, Python). While not frontend-heavy, you&#39;ll diagnose and resolve issues across the stack and drive reliability improvements end-to-end.</li>
<li>Mentor and multiply. Raise the reliability IQ of the broader engineering team. Establish SRE best practices, conduct production-readiness reviews, and mentor engineers on operational thinking.</li>
<li>Shape the roadmap. Partner with engineering and product leadership to evolve our infrastructure in step with generative AI features. Translate operational insights into strategic input on the product roadmap.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>You have 5+ years of hands-on experience operating cloud infrastructure (GCP and/or AWS), using Terraform and Kubernetes to run production systems at scale.</li>
<li>You have practical experience, or a strong demonstrated interest, in operating LLM-based systems, RAG pipelines, or agentic workloads, and understand the reliability challenges of non-deterministic systems.</li>
<li>You think in distributed-systems first principles (consistency, availability, partition tolerance) and translate that thinking into pragmatic infrastructure decisions.</li>
<li>You are proficient in at least one modern language (TypeScript, Java, Go, or Python) and comfortable navigating large, heterogeneous codebases, including environments where AI-generated PRs are common.</li>
<li>You build automation and improve systems so that whole categories of operational issues disappear over time.</li>
<li>You communicate complex infrastructure trade-offs clearly to both technical and non-technical stakeholders, and you write postmortems that lead to meaningful change.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$164,448–$234,926 USD</Salaryrange>
      <Skills>cloud infrastructure, Terraform, Kubernetes, LLM-based systems, RAG pipelines, agentic workloads, distributed systems, TypeScript, Java, Go, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that provides access to millions of songs. It was founded in 2006 and has since become one of the largest music streaming services in the world.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/fdfe281d-889c-478a-8f27-c9bc36b2b0cf</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>a560bd4c-a1a</externalid>
      <Title>Cloud Security Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Cloud Security Engineer to join our team. As a Cloud Security Engineer at Starling, you&#39;ll be building and supporting tooling and infrastructure that spans across AWS and GCP supporting our internal operations and interfacing with other teams to deliver the services that support our business.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Engineer Secure Foundations: You will lead the design and implementation of critical security services, with a heavy focus on building robust Identity and Access Management (IAM) systems and automated, API-driven certificate management workflows.</li>
<li>Security-as-Code &amp; Scalability: Leveraging a software-first philosophy, you will develop and maintain high-quality, scalable security tooling and middleware within ECS and Kubernetes environments, ensuring security logic is integrated directly into the deployment pipeline.</li>
<li>Collaborative Code Ownership: You will serve as a technical authority in cross-functional code reviews, acting as an engineering peer who helps teams bake security into their services from the first line of code to the final pull request.</li>
<li>Proactive System Hardening: You will stay ahead of the evolving threat landscape by treating security as a continuous engineering challenge, proactively identifying vulnerabilities and architecting technical solutions to fortify our global ecosystem.</li>
</ul>
<p>Professional Requirements:</p>
<ul>
<li>Demonstrated ability to architect secure, distributed systems with a focus on programmatic IAM and automated, API-driven PKI management.</li>
<li>Extensive experience with Infrastructure as Code (IaC) in Terraform and a deep commitment to writing clean, maintainable, and production-grade code, ideally in Golang.</li>
<li>A test-first mentality toward security, with experience building unit and integration tests into CI/CD pipelines to ensure that security guardrails are as reliable as the features they protect.</li>
<li>A strong conceptual grasp of cryptographic primitives and hands-on experience securing containerized workloads and service meshes within ECS and Kubernetes.</li>
<li>A track record of taking end-to-end ownership of complex technical projects, from initial design docs and RFCs through to deployment and observability.</li>
<li>A belief that if it isn&#39;t tested, it&#39;s broken, and a drive to proactively identify and fix vulnerabilities by treating security as a continuous engineering challenge.</li>
</ul>
<p>Our Team Philosophy:
The Security Engineering team is a diverse and dynamic group passionate about building secure and resilient systems. We&#39;re enthusiastic about security, but we&#39;re not about rigid, one-size-fits-all controls. We believe in striking a balance between protecting our systems and empowering our developers to build and innovate.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud Security, AWS, GCP, Identity and Access Management, API-driven Certificate Management, Infrastructure as Code, Terraform, Golang, Cryptographic Primitives, Containerized Workloads, Service Meshes</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a fully licensed UK bank with over 3,000 employees across four offices.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3B7E26FC24</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>871d4845-25a</externalid>
      <Title>Software Engineer, DevOps, Research Platform</Title>
      <Description><![CDATA[<p>We are seeking a talented and experienced software engineer to join our Research Platform team. You&#39;ll work closely with our R&amp;D team to build a cloud agnostic platform that improves the stability, scalability and velocity across the research department.</p>
<p>As a DevOps/Platform Engineer, your responsibilities will include designing and implementing complex systems; building a flexible yet solid and accessible development environment for researchers; designing, implementing, and advocating for solutions that handle large amounts of data and maintainable data pipelines; optimizing a variety of builds; and building strong relationships with researchers by communicating and producing documentation, or any other content, that helps them make the most of the tools and systems you&#39;ll build.</p>
<p>About you:</p>
<ul>
<li>5+ years of successful experience in a similar DX / DevOps / SRE role.</li>
<li>Proficiency in software development (Python, Go...) and programming best practices.</li>
<li>Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations...</li>
<li>Exposure to infrastructure management: CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability...</li>
<li>Technical product mindset (e.g. understanding how to debug poor adoption).</li>
<li>Excellent problem-solving and communication skills (the ability to contextualize, gauge risks, and get buy-in for high-stakes, impactful solutions).</li>
<li>Ownership, high agency, and a constant drive to learn and to improve things for others.</li>
<li>Autonomous, self-driven and able to work well in a fast-paced startup environment.</li>
<li>Low ego and team spirit mindset.</li>
</ul>
<p>Your application will be all the more interesting if you also have:</p>
<ul>
<li>First-hand Bazel (or equivalent) experience.</li>
<li>Strong knowledge of Python&#39;s ecosystem.</li>
<li>Familiarity with GPU based workloads and ecosystems.</li>
<li>Experience with fully remote environments (you&#39;re comfortable having some of your users on the other side of the globe).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development, Python, Go, site reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, Bazel, Python&apos;s ecosystem, GPU based workloads and ecosystems, full remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>a51375e8-30e</externalid>
      <Title>Member of Technical Staff, Software Co-Design AI HPC Systems</Title>
<Description><![CDATA[<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost. Our work spans today&#39;s frontier AI workloads and directly shapes the next generation of accelerators, system architectures, and large-scale AI platforms.</p>
<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>
<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>
<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>
<p>About the Team</p>
<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>
<p>Microsoft Superintelligence Team</p>
<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you’re a brilliant, highly ambitious, and low-ego individual, you’ll fit right in—come and join us as we work on our next generation of models!</p>
<p>Responsibilities</p>
<ul>
<li>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</li>
<li>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</li>
<li>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</li>
<li>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</li>
<li>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</li>
<li>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</li>
<li>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams.</li>
<li>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, Experience designing or operating large-scale AI clusters for training or inference, Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications, Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Background in performance modeling and capacity planning for future hardware generations, Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a technology company that develops and markets software products and services. It is one of the largest and most successful technology companies in the world.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-3/</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>cd1a0d16-311</externalid>
      <Title>Member of Technical Staff, Software Co-Design AI HPC Systems</Title>
      <Description><![CDATA[<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost.</p>
<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>
<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>
<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>
<p><strong>Microsoft Superintelligence Team</strong></p>
<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact.</p>
<p><strong>Responsibilities</strong></p>
<p>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</p>
<p>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</p>
<p>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</p>
<p>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</p>
<p>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</p>
<p>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</p>
<p>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams.</p>
<p>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>
<p><strong>Qualifications</strong></p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p><strong>Additional or Preferred Qualifications</strong></p>
<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Strong background in one or more of the following areas:</p>
<ul>
<li>AI accelerator or GPU architectures</li>
<li>Distributed systems and large-scale AI training/inference</li>
<li>High-performance computing (HPC) and collective communications</li>
<li>ML systems, runtimes, or compilers</li>
<li>Performance modeling, benchmarking, and systems analysis</li>
<li>Hardware–software co-design for AI workloads</li>
<li>Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development</li>
</ul>
<ul>
<li>Proven ability to work across organizational boundaries and influence technical decisions involving multiple stakeholders</li>
<li>Experience designing or operating large-scale AI clusters for training or inference</li>
<li>Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications</li>
<li>Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)</li>
<li>Background in performance modeling and capacity planning for future hardware generations</li>
<li>Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews</li>
<li>Publications, patents, or open-source contributions in systems, architecture, or ML systems are a plus</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, LLMs, multimodal models, or recommendation systems, and their systems-level implications, Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Performance modeling and capacity planning for future hardware generations, Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a technology company that develops and markets software products and services. It is one of the largest and most successful technology companies in the world.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>139cd1f4-231</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>At Anthropic, we are building some of the most complex and large-scale AI infrastructure in the world. As that infrastructure scales rapidly, so does the imperative to optimise how we use it. As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable—without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help build the telemetry, cost attribution, and optimisation frameworks that ensure every dollar of our infrastructure investment delivers maximum value. This is a high-impact, cross-functional role at the intersection of systems engineering, financial optimisation, and AI infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilisation, and costs across our cloud and datacentre fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimise their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimise cluster configurations, workload placement, and resource utilisation across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimisations across multiple services and platforms to deliver measurable utilisation and performance gains.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>
<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP</li>
<li>Have experience optimising end-to-end performance of distributed systems, including workload right-sizing and resource utilisation tuning</li>
<li>Possess a deep curiosity about how things work under the hood, and a proven ability to work independently to solve opaque performance issues</li>
<li>Have experience designing or working with performance and utilisation monitoring tools in large-scale, distributed environments</li>
<li>Have strong problem-solving skills and the ability to work independently and navigate ambiguity</li>
<li>Have excellent communication and collaboration skills; you will work closely with internal and external stakeholders to build consensus and drive projects forward</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with machine learning infrastructure workloads and associated networking technologies such as NCCL</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
<li>An ability to quickly understand systems-design trade-offs and keep track of rapidly evolving software systems</li>
<li>Published work in performance optimisation and scaling distributed systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, performance optimisation, scalability, continuous improvement, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, systems design tradeoffs, published work in performance optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation building some of the most complex and large-scale AI infrastructure in the world.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>3cc256d7-b0a</externalid>
      <Title>Transaction Manager</Title>
      <Description><![CDATA[<p>As a Transaction Manager at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our data centre capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems, requiring you to bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Help identify data centre capacity opportunities and options by managing a network of relationships across data centre developers, brokers, and power contacts.</li>
<li>Lead the RFP and commercial sourcing process for specific data centre deals, managing developer outreach, proposal evaluation, and competitive selection processes</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organisation, platform engineers, and finance organisation to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact (SPOC) for auxiliary organisations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing process status</li>
<li>Develop and maintain transaction timelines, tracking critical path items and proactively identifying risks that could impact deal closure</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data centre leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Demonstrate exceptional communication skills, able to serve as an effective liaison between internal stakeholders and external partners</li>
<li>Are highly organised with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Have a collaborative mindset and can build trust with diverse stakeholder groups across the organisation</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data centre or hyperscale infrastructure transactions specifically</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>
<li>Possess familiarity with data centre developer ecosystems and market dynamics</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
<li>Understand utility coordination, power procurement, or energy considerations in data centre transactions</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
</ul>
<p>The annual compensation range for this role is $365,000 - $435,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000 - $435,000 USD</Salaryrange>
      <Skills>transaction management, commercial real estate, data centre leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, communication skills, collaboration, attention to detail, data centre or hyperscale infrastructure transactions, AI/ML workloads, legal teams, data centre developer ecosystems, high-growth technology companies, utility coordination, power procurement, energy considerations, corporate development, strategic partnerships, infrastructure investment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company aims to build beneficial AI systems that are safe and beneficial for users and society as a whole.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5099080008</Applyto>
      <Location>San Francisco, CA, New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>5ef0c826-856</externalid>
      <Title>Engineering Manager, Safeguards Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly — and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organisation access that data safely and ergonomically.</p>
<p>As Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements — and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities</li>
<li>Own the strategy and execution for porting the safeguards offline data stack — including PII storage and tooling — across new cloud and deployment environments as Anthropic expands</li>
<li>Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints</li>
<li>Drive tooling and architecture decisions that maximise data retention within the bounds of our privacy and compliance requirements</li>
<li>Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g. HIPAA, EU privacy regulations)</li>
<li>Collaborate closely with enterprise customers and product teams on zero data retention offerings, balancing safety needs with robust enterprise data contracts</li>
<li>Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination</li>
<li>Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals</li>
<li>Partner with recruiting to attract, hire, and retain strong engineering talent</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 4+ years of front-line engineering management experience</li>
<li>Have a track record of leading teams that build and operate data infrastructure at scale</li>
<li>Have hands-on software engineering experience as an individual contributor prior to moving into management</li>
<li>Have a strong understanding of data privacy principles, PII handling, and compliance frameworks</li>
<li>Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities</li>
<li>Have experience working cross-functionally across infrastructure, product, and compliance or security teams</li>
<li>Are clear and persuasive communicators, both in writing and in person</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with multi-cloud or multi-region data portability, particularly in regulated environments</li>
<li>Have built privacy-preserving data pipelines or interfaces for ML workloads</li>
<li>Have experience with enterprise data contracts or zero data retention architectures</li>
<li>Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data</li>
<li>Have a passion for building diverse and inclusive teams</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD
£325,000 - £390,000 GBP</Salaryrange>
      <Skills>data infrastructure, data privacy, compliance frameworks, software engineering, team management, cross-functional collaboration, communication, data portability, multi-cloud, multi-region, regulated environments, privacy-preserving data pipelines, ML workloads, enterprise data contracts, zero data retention architectures, in-memory storage, compute for sensitive data, novel approaches to data processing, diverse and inclusive teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5103078008</Applyto>
      <Location>London, UK; New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>325c968b-d59</externalid>
      <Title>Inference Technical Lead, Sora</Title>
      <Description><![CDATA[<p><strong>Inference Technical Lead, Sora</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Research</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a GPU Inference Engineer to contribute to improvements in model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. You’ll also be engaged in model design, helping our researchers develop inference-friendly models.</p>
<p>_<strong>This role is critical to scaling the team’s broader goals - it will directly enable leadership to focus on higher-leverage initiatives by building a stronger technical foundation.</strong>_</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Perform engineering efforts focused on improving model serving, inference performance, and system efficiency</li>
<li>Drive optimizations from a kernel and data movement perspective to improve system throughput and reliability</li>
<li>Partner closely with research and product teams to ensure our models perform effectively at scale</li>
<li>Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have deep expertise in model performance optimization, particularly at the inference layer</li>
<li>Have a strong background in kernel-level systems, data movement, and low-level performance tuning</li>
<li>Are excited about scaling high-performing AI systems that serve real-world, multimodal workloads</li>
<li>Can navigate ambiguity, set technical direction, and drive complex initiatives to completion</li>
</ul>
<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong></p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K • Offers Equity</Salaryrange>
      <Skills>GPU Inference Engineer, Model Performance Optimization, Kernel-Level Systems, Data Movement, Low-Level Performance Tuning, AI Systems, Multimodal Workloads, Complex Initiatives, Technical Direction</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3c2d1178-777f-4613-a084-75a3d37cd1af</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>d3a39f4c-d95</externalid>
      <Title>Software Engineer, Inference - Multi Modal</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Inference - Multi Modal</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $555K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Inference team powers the deployment of our most advanced models - including our GPT models, 4o Image Generation, and Whisper - across a variety of platforms. Our work ensures these models are available, performant, and scalable in production, and we partner closely with Research to bring the next generation of models into the world. We&#39;re a small, fast-moving team of engineers focused on delivering a world-class developer experience while pushing the boundaries of what AI can do.</p>
<p>We’re expanding into multimodal inference, building the infrastructure needed to serve models that handle image, audio, and other non-text modalities. These workloads are inherently more heterogeneous and experimental, involving diverse model sizes and interactions, more complex input/output formats, and tighter coordination with product and research.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a software engineer to help us serve OpenAI’s multimodal models at scale. You’ll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other MM workloads in production.</p>
<p>This work is inherently cross-functional: you’ll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. You&#39;ll build and optimize the systems that let users generate speech, understand images, and interact with models in ways far beyond text.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and implement inference infrastructure for large-scale multimodal models.</li>
<li>Optimize systems for high-throughput, low-latency delivery of image and audio inputs and outputs.</li>
<li>Enable experimental research workflows to transition into reliable production services.</li>
<li>Collaborate closely with researchers, infra teams, and product engineers to deploy state-of-the-art capabilities.</li>
<li>Contribute to system-level improvements including GPU utilization, tensor parallelism, and hardware abstraction layers.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience building and scaling inference systems for LLMs or multimodal models.</li>
<li>Have worked with GPU-based ML workloads and understand the performance dynamics of large models, especially with complex data like images or audio.</li>
<li>Enjoy experimental, fast-evolving work and collaborating closely with research.</li>
<li>Are comfortable dealing with systems that span networking, distributed compute, and high-throughput data handling.</li>
<li>Have familiarity with inference tooling like vLLM, TensorRT-LLM, or custom model parallel systems.</li>
<li>Own problems end-to-end and are excited to operate in ambiguous, fast-moving spaces.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience working with image generation or audio synthesis models in production.</li>
<li>Exposure to distributed ML training or system-efficient model design.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $555K • Offers Equity</Salaryrange>
      <Skills>Software Engineer, Inference Infrastructure, GPU-based ML Workloads, Tensor Parallelism, Hardware Abstraction Layers, vLLM, TensorRT-LLM, Custom Model Parallel Systems, Image Generation, Audio Synthesis, Distributed ML Training, System-Efficient Model Design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4d14449e-5e7f-45d4-b103-8776a6c87086</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9f9ededf-ecb</externalid>
      <Title>Software Engineer, Frontier Clusters Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Frontier Clusters Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Frontier Systems team at OpenAI builds, launches, and supports the largest supercomputers in the world, which OpenAI uses for its most cutting-edge model training.</p>
<p>We take data center designs, turn them into real, working systems and build any software needed for running large-scale frontier model trainings.</p>
<p>Our mission is to bring up, stabilize and keep these hyperscale supercomputers reliable and efficient during the training of the frontier models.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for engineers to operate the next generation of compute clusters that power OpenAI’s frontier research.</p>
<p>This role blends distributed systems engineering with hands-on infrastructure work in our largest datacenters. You will grow Kubernetes clusters to massive scale, automate bare-metal bring-up, and build the software layer that hides the complexity of vast numbers of nodes across multiple data centers.</p>
<p>You will work at the intersection of hardware and software, where speed and reliability are critical. Expect to manage fast-moving operations, quickly diagnose and fix issues when things are on fire, and continuously raise the bar for automation and uptime.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management</li>
<li>Build software abstractions that unify multiple clusters and present a seamless interface to training workloads</li>
<li>Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale</li>
<li>Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles</li>
<li>Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure</li>
<li>Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have deep experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments</li>
<li>Bring strong programming or scripting skills (Python, Go, or similar) and familiarity with Infrastructure-as-Code tools such as Terraform or CloudFormation</li>
<li>Are comfortable with bare-metal Linux environments, GPU hardware, and large-scale networking</li>
<li>Enjoy solving fast-moving, high-impact operational problems and building automation to eliminate manual work</li>
<li>Can balance careful engineering with the urgency of keeping mission-critical systems running</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Experience as an infrastructure, systems, or distributed systems engineer in large-scale or high-availability environments</li>
<li>Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads</li>
<li>Proficiency in cloud infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations</li>
</ul>
<p>_Bonus: background with GPU workloads, firmware management, or high-performance computing_</p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $490K • Offers Equity</Salaryrange>
      <Skills>Kubernetes, Python, Go, Terraform, CloudFormation, Linux, GPU hardware, Large-scale networking, Infrastructure-as-Code, Cloud infrastructure concepts, Containerized workloads, Distributed systems engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/770d5c3f-4e72-4b49-aec4-d444e8ad7a64</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>43ec3483-7d3</externalid>
      <Title>Software Engineer, Fleet Infrastructure</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Software Engineer, Fleet Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>Job Description</strong></p>
<p>This role will support the fleet infrastructure team at OpenAI. The fleet team focuses on running the world’s largest, most reliable, and frictionless GPU fleet to support OpenAI’s general-purpose model training and deployment. Work on this team includes:</p>
<ul>
<li>Maximizing GPUs doing useful work by building user-friendly scheduling and quota systems</li>
<li>Running a reliable and low-maintenance platform by building push-button automation for Kubernetes cluster provisioning and upgrades</li>
<li>Supporting research workflows with service frameworks and deployment systems</li>
<li>Ensuring fast model startup times through high-performance snapshot delivery, from blob storage down to hardware caching</li>
<li>Much more!</li>
</ul>
<p><strong>About the Role</strong></p>
<p>As an engineer within Fleet Infrastructure, you will design, write, deploy, and operate infrastructure systems for model deployment and training on one of the world’s largest GPU fleets. The scale is immense, the timelines are tight, and the organization is moving fast; this is an opportunity to shape a critical system in support of OpenAI&#39;s mission to advance AI capabilities responsibly.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, implement, and operate components of our compute fleet including job scheduling, cluster management, snapshot delivery, and CI/CD systems.</li>
<li>Interface with researchers and product teams to understand workload requirements</li>
<li>Collaborate with hardware, infrastructure, and business teams to provide a high-utilization and high-reliability service</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience with hyperscale compute systems</li>
<li>Possess strong programming skills</li>
<li>Have experience working in public clouds (especially Azure)</li>
<li>Have experience working with Kubernetes</li>
<li>Bring an execution-focused mentality paired with a rigorous focus on user requirements</li>
<li>As a bonus, have an understanding of AI/ML workloads</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $490K • Offers Equity</Salaryrange>
      <Skills>hyperscale compute systems, programming skills, public clouds (especially Azure), Kubernetes, execution focused mentality, AI/ML workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a58add97-1968-4d5c-b504-ab62bea12df3</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>3b3a14ec-b2e</externalid>
      <Title>Member of Technical Staff - Engineering Manager, Copilot Memory and Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff - Engineering Manager, Copilot Memory and Personalization at its Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionizing AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Engineering Manager, you will build and lead a team of backend and machine learning engineers, including driving project planning, prioritization of work, and designing features. You will guide teams in identifying dependencies and developing design documents for a product, application, service, or platform. You will make hands-on contributions to the codebase and infrastructure. You will guide architecture and design efforts by leading discussions, creating proposals and design documents, and ensuring solutions meet business, security, and compliance requirements. You will ship AI-powered experiences that will shape how millions of people interact with AI in the future. You will drive implementation of features and systems, breaking down long-term goals into clear milestones, aligning with release plans, and ensuring cross-team coordination.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build and lead a team of backend and machine learning engineers, including driving project planning, prioritization of work, and designing features.</li>
<li>Guide teams in identifying dependencies and developing design documents for a product, application, service, or platform.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proven experience building large-scale distributed systems and optimizing workloads for efficiency and scalability.</li>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in AI.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong leadership and management skills.</li>
<li>Excellent communication and collaboration skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large-scale distributed systems, optimizing workloads, AI technologies, machine learning, backend engineering, leadership, management, communication, collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to push the boundaries of AI, aiming to build systems with true artificial intelligence across agents, applications, services, and infrastructure, making AI accessible to all.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-engineering-manager-copilot-memory-and-personalization/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>a507bee2-c87</externalid>
      <Title>Quality Inspector</Title>
      <Description><![CDATA[<p>We&#39;re now looking to expand our Quality Control Department with the addition of a meticulous and dedicated Quality Inspector. This is a key role in ensuring that the components and assemblies we produce and procure meet the exacting standards demanded by world-class competition.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>Perform dimensional and visual inspections of mechanical components.</p>
<ul>
<li>Interpret and work from detailed engineering drawings, CAD data, and technical documentation.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Understanding of high-performance manufacturing standards and tolerances.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Understanding of high-performance manufacturing standards and tolerances, Hands-on experience with CMM (ideally both manual and programmable) and conventional inspection equipment, Able to read and interpret engineering drawings and specifications, Highly organised with meticulous attention to detail, Computer literate with strong MS Office skills, Skilled in multitasking and prioritising workloads under pressure, Excellent communication skills, and a keen eye for problem solving, Previous experience in a quality inspection or precision engineering role, ideally within motorsport or automotive</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>M-Sport</Employername>
      <Employerlogo>https://logos.yubhub.co/m-sport.co.uk.png</Employerlogo>
      <Employerdescription>M-Sport is a global motorsport business with state-of-the-art facilities at home and winning performance on the world&apos;s most prestigious rally stages.</Employerdescription>
      <Employerwebsite>https://www.m-sport.co.uk</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.m-sport.co.uk/quality-inspector-wqc251215</Applyto>
      <Location>Brackley</Location>
      <Country></Country>
      <Postedate>2025-12-20</Postedate>
    </job>
    <job>
      <externalid>317e2da0-af4</externalid>
      <Title>Machine Shop Supervisor</Title>
      <Description><![CDATA[<p>Our in-house Machine Shop is a critical part of our success – delivering high-performance components for our competition cars and development projects. We’re now looking for a Machine Shop Supervisor to lead and develop this essential function.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>As the Machine Shop Supervisor, you will lead and manage our team of skilled Machinists to ensure efficient, high-quality production in a safe working environment. You will be responsible for overseeing day-to-day operations, scheduling workloads, maintaining equipment, and ensuring compliance with safety and quality standards.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong interpretation of engineering drawings.</li>
<li>Skilled in multitasking and prioritising workloads under pressure.</li>
<li>Excellent communication skills and a keen eye for problem solving.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strong interpretation of engineering drawings, Skilled in multitasking and prioritising workloads under pressure, Excellent communication skills, Two years proven leadership experience in a machine shop environment, Confident in programming HyperMill and Heidenhain controls</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>M-Sport</Employername>
      <Employerlogo>https://logos.yubhub.co/m-sport.co.uk.png</Employerlogo>
      <Employerdescription>M-Sport is a leading motorsport company delivering high-performance components for competition cars and development projects.</Employerdescription>
      <Employerwebsite>https://www.m-sport.co.uk</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.m-sport.co.uk/machine-shop-supervisor-wmch251212</Applyto>
      <Location>Brackley</Location>
      <Country></Country>
      <Postedate>2025-12-20</Postedate>
    </job>
    <job>
      <externalid>48a2e456-8b9</externalid>
      <Title>Rally Technician</Title>
      <Description><![CDATA[<p>We&#39;re looking for a self-motivated team player with a positive and enthusiastic attitude. The ideal candidate will be educated to technician level and have a thorough understanding of motor vehicles and their systems.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Highly organised with meticulous attention to detail.</li>
<li>Skilled in multitasking and prioritising workloads under pressure.</li>
<li>Excellent communication skills and a keen eye for problem solving.</li>
<li>Ability to work both independently and collaboratively as part of a multi-disciplinary team.</li>
<li>Willingness to travel globally and work across events and test programmes.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>previous experience in a motorsport environment, thorough understanding of motor vehicles and their systems, highly organised with meticulous attention to detail, skilled in multitasking and prioritising workloads under pressure, excellent communication skills, previous experience in a rally, rallycross, or racing environment</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>M-Sport UK</Employername>
      <Employerlogo>https://logos.yubhub.co/m-sport.co.uk.png</Employerlogo>
      <Employerdescription>M-Sport UK is a world-leading motorsport team with a rich heritage in international rallying. From our base in the heart of Cumbria, we design, build and run top-level rally cars that compete and win on stages across the globe.</Employerdescription>
      <Employerwebsite>https://www.m-sport.co.uk</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.m-sport.co.uk/rally-technicians-wwor251215</Applyto>
      <Location>Brackley</Location>
      <Country></Country>
      <Postedate>2025-12-20</Postedate>
    </job>
  </jobs>
</source>