<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>24176cb8-311</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>
<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage massive-scale clusters</li>
<li>Design, develop, and extend an in-house container orchestration platform</li>
<li>Collaborate with research teams to architect and optimize compute clusters</li>
<li>Profile, debug, and resolve complex system-level performance bottlenecks</li>
<li>Own end-to-end infrastructure initiatives</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Deep expertise in virtualization technologies and advanced containerization/sandboxing</li>
<li>Strong proficiency in systems programming languages such as C/C++ and Rust</li>
<li>A proven track record of profiling, debugging, and optimizing complex system-level performance issues</li>
</ul>
<p>Preferred skills and experience:</p>
<ul>
<li>Linux kernel development, hypervisor extensions, or low-level systems programming for compute-intensive workloads</li>
<li>Operating or designing large-scale AI training/inference clusters</li>
<li>Familiarity with performance tools, tracing, and debugging in production distributed environments</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d4292d1-227</externalid>
      <Title>Software Engineer, Sandboxing (Systems)</Title>
      <Description><![CDATA[<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>
<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>
<p>Responsibilities:</p>
<ul>
<li>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</li>
<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>
<li>Investigate and resolve performance bottlenecks in virtualized environments</li>
<li>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</li>
<li>Develop tooling for monitoring and improving virtualization performance</li>
<li>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</li>
<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>
<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>
<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have experience with Linux kernel development, system programming, or related low-level software engineering</li>
<li>Understand virtualization technologies (KVM, Xen, QEMU, etc.) and their performance characteristics</li>
<li>Have experience optimizing system performance for compute-intensive workloads</li>
<li>Are familiar with modern CPU architectures and memory systems</li>
<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>
<li>Understand Linux resource management, scheduling, and memory management</li>
<li>Have experience profiling and debugging system-level performance issues</li>
<li>Are comfortable diving into unfamiliar codebases and technical domains</li>
<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>
<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>GPU virtualization and acceleration technologies</li>
<li>Cloud infrastructure at scale (AWS, GCP)</li>
<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>
<li>eBPF programming and kernel tracing tools</li>
<li>OS-level security hardening and isolation techniques</li>
<li>Developing custom scheduling algorithms for specialized workloads</li>
<li>Performance optimization for ML/AI-specific workloads</li>
<li>Network stack optimization and high-performance networking</li>
<li>TPUs, custom ASICs, or other ML accelerators</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</li>
<li>Implementing custom memory management schemes for large-scale distributed training</li>
<li>Developing specialized I/O schedulers to prioritize ML workloads</li>
<li>Creating lightweight virtualization solutions tailored for AI inference</li>
<li>Building monitoring and instrumentation tools to identify system-level bottlenecks</li>
<li>Enhancing communication between VMs for distributed training workloads</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025591008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09c520cf-f62</externalid>
      <Title>Systems Engineer, Kernel</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>
<p>As a kernel generalist, you will be responsible for debugging kernel-level issues, analyzing and fixing crashes, panics, and dumps, and upstreaming fixes and features that improve the performance and reliability of our stack.</p>
<p>This position is ideal for someone who thrives in low-level systems engineering, understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>
<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>
<p>Our Team&#39;s Stack:</p>
<ul>
<li>Python, Go, bash/sh, C</li>
<li>Prometheus, VictoriaMetrics, Grafana</li>
<li>Linux Kernel (custom build), Ubuntu</li>
<li>Intel/AMD/ARM CPUs, NVIDIA GPUs, DPUs, InfiniBand and Ethernet NICs</li>
<li>Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>
</ul>
<p>Focus Areas:</p>
<ul>
<li>Kernel Debugging – Analyze kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>
<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>
<li>Stack-Wide Support – Ensure kernel support and stability across:
<ul>
<li>Virtualization (KubeVirt, QEMU, VFIO)</li>
<li>Container runtimes (containerd, nydus, kubelet)</li>
<li>HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand)</li>
</ul>
</li>
<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, and ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>
<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>
</ul>
<p>About the role:</p>
<ul>
<li>Triage and fix kernel crashes and performance regressions.</li>
<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>
<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>
<li>Implement diagnostics and tooling for kernel-level observability.</li>
<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>
<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>
<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>
<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, and kdump.</li>
<li>Strong C programming skills with the ability to write maintainable, upstream-quality code.</li>
<li>Experience working with kernel modules, drivers, and subsystems.</li>
<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Contributions to the Linux kernel or related open-source projects.</li>
<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>
<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>
<li>GPU/DPU bring-up and driver experience.</li>
<li>Experience in HPC or large-scale distributed systems.</li>
<li>Familiarity with QA/QE best practices.</li>
<li>Experience working in cloud environments.</li>
<li>Experience as a software engineer writing large-scale applications.</li>
<li>Experience with machine learning is a huge bonus.</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance &amp; stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4599319006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d9bfb5a-511</externalid>
      <Title>Senior Firmware Engineer, OpenBMC</Title>
      <Description><![CDATA[<p>To accelerate datacenter deployment and management, CoreWeave is expanding its firmware engineering team to focus on developing and maintaining OpenBMC-based firmware for our next-generation Baseboard Management Controllers (BMCs).</p>
<p>As a Senior Firmware Engineer, you will design, implement, and maintain embedded firmware features that enable secure, scalable, and reliable control across CoreWeave&#39;s high-performance compute infrastructure. You will work independently on complex components, collaborate closely with cross-functional teams, and help set best practices for firmware quality and performance.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design &amp; Implement: Develop and enhance OpenBMC firmware in C++ for CoreWeave&#39;s custom server platforms, contributing to key subsystems such as sensor management, power and thermal control, networking, and system monitoring.</li>
<li>Integrate &amp; Debug: Collaborate with hardware design, platform software, and reliability teams to integrate firmware with new hardware and validate performance across diverse environments.</li>
<li>Optimize &amp; Harden: Improve BMC performance and harden security.</li>
<li>Root Cause Analysis: Perform deep system-level debugging using tools such as GDB, JTAG, or logic analyzers to resolve cross-layer issues between hardware, firmware, and OS.</li>
<li>Automate &amp; Validate: Contribute to continuous integration and automated testing frameworks for OpenBMC build and validation.</li>
<li>Document &amp; Share: Maintain clear technical documentation and participate in design reviews to ensure consistency and maintainability across the firmware codebase.</li>
<li>Collaborate Broadly: Partner with other ICs and technical leads across CoreWeave&#39;s infrastructure engineering, hardware design, and operations teams to align firmware capabilities with platform and datacenter goals.</li>
</ul>
<p>Minimum Qualifications:</p>
<ul>
<li>Experience: 4+ years of professional experience in firmware or embedded systems development, including direct work with Linux-based OpenBMC firmware.</li>
<li>Education: Bachelor&#39;s degree in Computer Engineering, Electrical Engineering, Computer Science, or a related field.</li>
</ul>
<p>Technical Skills:</p>
<ul>
<li>Proficiency in C/C++ for embedded systems.</li>
<li>Hands-on experience with OpenBMC, the Yocto Project, and embedded Linux environments.</li>
<li>Familiarity with hardware interfaces and protocols (I2C, SPI, UART, GPIO, IPMI, DMTF Redfish).</li>
<li>Experience with hardware bring-up, board-level debugging, and sensor integration.</li>
<li>Comfort with Linux kernel configuration, device trees, and BSP-level integration.</li>
<li>Working knowledge of source code control systems such as Git.</li>
<li>Comfort with debugging tools such as GDB and JTAG, and with debugging over serial or remote consoles.</li>
<li>Basic scripting skills in Python or Bash for build automation and validation.</li>
<li>Strong problem-solving and analytical thinking; able to break down complex system-level issues.</li>
<li>Communicates effectively with peers across hardware, firmware, and operations teams.</li>
<li>Self-driven with a focus on delivering high-quality, maintainable code.</li>
<li>Thrives in a fast-paced environment and balances multiple priorities effectively.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience developing CI/CD pipelines for firmware builds and regression testing</li>
<li>Exposure to large-scale datacenter or HPC environments</li>
<li>Contributions to open-source firmware projects or upstream Linux development</li>
</ul>
<p>The base salary range for this role is $153,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$153,000 to $242,000</Salaryrange>
      <Skills>C/C++, OpenBMC, Yocto Project, embedded Linux, hardware interfaces, protocols, Linux kernel configuration, device trees, BSP-level integration, source code control system, debugging tools, scripting skills, problem-solving, analytical thinking, CI/CD pipeline, firmware builds, regression testing, large-scale datacenter, HPC environments, open-source firmware projects, upstream Linux development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4452431006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>de168cba-02c</externalid>
      <Title>Principal Software Engineer, Platform Security</Title>
      <Description><![CDATA[<p>We&#39;re looking for a principal-level engineer to serve as a technical leader for platform security across Anduril. This role combines deep expertise in cryptography, systems security, and secure architecture with the ability to drive security strategy across business lines and the platform.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own the technical vision and architecture for platform security across Anduril&#39;s product ecosystem</li>
<li>Design cryptographic systems, protocols, and key management architectures for autonomous and robotic platforms operating in contested and disconnected environments</li>
<li>Lead the design of hardware root-of-trust architectures integrating TPMs, TEEs, HSMs, and secure boot across diverse embedded platforms</li>
<li>Drive the strategy for promoting business-line security implementations into shared, composable platform services</li>
<li>Serve as the senior technical authority for security architecture reviews across the organization, providing definitive guidance on cryptographic design, protocol security, and system hardening</li>
<li>Define security patterns, reference architectures, and engineering standards that enable teams across Anduril to build securely and independently</li>
<li>Mentor and develop senior engineers on the team, raising the bar for security engineering across the organization</li>
<li>Represent Anduril&#39;s security engineering capabilities to customers, partners, and auditors when deep technical credibility is required</li>
<li>Evaluate emerging threats, cryptographic standards, and security technologies, driving adoption where they strengthen the platform</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>12+ years of experience in software engineering, with significant depth in systems security and cryptography</li>
<li>Expert-level knowledge of cryptographic protocol design, including key management architectures, certificate systems, and cryptographic agility</li>
<li>Deep experience with hardware security: TPM, TEE, HSM, secure boot, and hardware root-of-trust design across multiple platform types</li>
<li>Proficient in two or more of: C++, Rust, Go</li>
<li>Experience designing security architectures for embedded, real-time, or robotic systems with constrained environments</li>
<li>Track record of leading cross-organizational technical initiatives and driving architectural decisions that span multiple teams</li>
<li>Strong ability to communicate complex security concepts to engineering leadership, product teams, and external stakeholders</li>
<li>Experience performing and leading threat modeling, security architecture reviews, and cryptographic design reviews</li>
<li>Eligible to obtain and maintain active U.S. Secret security clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with post-quantum cryptography, distributed key generation (DKG), or threshold cryptographic schemes</li>
<li>Background in defense, aerospace, or autonomous systems with exposure to FIPS 140, Common Criteria, or NSA CSfC requirements</li>
<li>Experience designing secure communication protocols for autonomous platforms or mesh networks</li>
<li>Deep knowledge of Linux kernel security, mandatory access controls (SELinux/AppArmor), and OS hardening at scale</li>
<li>Experience building and evolving platform security services consumed by dozens of teams</li>
<li>Familiarity with compliance frameworks (STIGs, NIST 800-53, CMMC) and translating them into engineering controls that don&#39;t compromise developer velocity</li>
<li>Publications, patents, or recognized contributions in cryptography or systems security</li>
<li>Experience with Nix build systems and reproducible build pipelines for security-critical software</li>
</ul>
<p>US Salary Range: $254,000-$336,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$254,000-$336,000 USD</Salaryrange>
      <Skills>cryptography, systems security, secure architecture, cryptographic protocol design, key management architectures, certificate systems, cryptographic agility, hardware security, TPM, TEE, HSM, secure boot, hardware root-of-trust design, embedded systems, real-time systems, robotic systems, constrained environments, cross-organizational technical initiatives, architectural decisions, complex security concepts, threat modeling, security architecture reviews, cryptographic design reviews, U.S. Secret security clearance, post-quantum cryptography, distributed key generation, threshold cryptographic schemes, defense, aerospace, autonomous systems, FIPS 140, Common Criteria, NSA CSfC requirements, secure communication protocols, mesh networks, Linux kernel security, mandatory access controls, OS hardening, compliance frameworks, STIGs, NIST 800-53, CMMC, publications, patents, recognized contributions, Nix build systems, reproducible build pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5087992007</Applyto>
      <Location>Boston, Massachusetts, United States; Costa Mesa, California, United States; Seattle, Washington, United States; Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>901593ac-ffd</externalid>
      <Title>Systems Engineer, MAPS</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>Available Location:</strong></p>
<p>Austin</p>
<p><strong>About the Department</strong></p>
<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform. This platform sits within approximately 50 milliseconds of about 95% of the Internet-connected population and serves, on average, over 46 million HTTP requests per second.</p>
<p><strong>About the Team</strong></p>
<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real time load balancing and long term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>
<p><strong>What are we looking for?</strong></p>
<p>We are looking for highly motivated software engineers to join our MAPS team. You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>
<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>
<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>
<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>
<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>
<li>Track record of building long-term sustainable, maintainable systems.</li>
<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>
<li>Experience with one or more of the following programming languages: Go, Rust, C</li>
</ul>
<p><strong>Bonuses</strong></p>
<ul>
<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>
<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>
<li>Experience with large scale configuration/deployment management.</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare operates one of the world&apos;s largest networks, powering millions of websites and Internet properties for customers ranging from individual bloggers to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7742773</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3c6419c4-a9b</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable,without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead.</li>
<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java).</li>
<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>
<li>Have experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning.</li>
<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues.</li>
<li>Have experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments.</li>
<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>
<li>Have excellent communication and collaboration skills, as you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience with machine learning infrastructure workloads, as well as associated networking technologies like NCCL.</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF.</li>
<li>Ability to quickly understand systems-design tradeoffs and keep track of rapidly evolving software systems.</li>
<li>Published work in performance optimization and scaling distributed systems.</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>59e88547-efc</externalid>
      <Title>Senior Software Engineer, Systems</Title>
      <Description><![CDATA[<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand. The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p>Responsibilities</p>
<ul>
<li>Lead infrastructure projects from design through delivery, owning scope, execution, and outcomes</li>
<li>Build and maintain systems that support AI clusters at massive scale (thousands to hundreds of thousands of machines)</li>
<li>Partner with cloud providers and internal teams to solve compute, networking, and reliability challenges</li>
<li>Tackle difficult technical problems in your domain and proactively fill gaps in tooling, documentation, and processes</li>
<li>Contribute to operational practices including incident response, postmortems, and on-call rotations</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Requirements</p>
<ul>
<li>6+ years of software engineering experience</li>
<li>Have led technical projects end-to-end over multiple months, including scoping, breaking down work, and driving delivery</li>
<li>Have deep knowledge of distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Solve hard problems independently and know when to pull others in</li>
<li>Help teammates grow through knowledge sharing and thoughtful technical guidance</li>
<li>Communicate clearly in design docs, presentations, and cross-functional discussions</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Security and privacy best practice expertise</li>
<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000-£325,000 GBP</Salaryrange>
      <Skills>Distributed systems, Reliability, Cloud platforms, Kubernetes, IaC, AWS/GCP, Systems language, Python, Rust, Go, Java, Security and privacy best practice, Machine learning infrastructure, GPUs, TPUs, Trainium, Networking infrastructure, NCCL, Low level systems experience, Linux kernel tuning, eBPF</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that develops AI systems. It has a team of researchers, engineers, and experts working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4915842008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9db01030-d75</externalid>
      <Title>Principal Embedded Software Engineer, EW</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Principal Embedded Software Engineer to join our Electromagnetic Warfare (EW) team. As a lead Haskell engineer, you&#39;ll work with EW leadership to craft our software roadmap, design large-scale systems using functional programming and algebra-driven design principles, and build teams to execute our shared vision.</p>
<p>Your responsibilities will include leading teams of Haskell developers to implement high-performance, high-assurance software systems, participating in EW technology roadmapping, software architecture, and holistic design review processes, and building teams to scale the execution of our proven FP-based software development approach.</p>
<p>To be successful in this role, you&#39;ll need experience building and leading teams dedicated to the functional programming approach, eligibility to obtain and maintain an active U.S. Top Secret SCI security clearance, and the ability to relocate to and work in person in our RF laboratory in Orange County, California.</p>
<p>Preferred qualifications include experience working with typed functional programming languages, such as Haskell, Scala, F#, OCaml, or Rust, experience with MATLAB, especially C code generation, experience with Nix/NixOS, experience with Linux kernel module development, experience with graphics programming, and experience with FPGA development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>Haskell, functional programming, team leadership, software development, security clearance, typed functional programming languages, MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, FPGA development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops software for defence and aerospace applications.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5095386007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45d93304-36a</externalid>
      <Title>Lead Embedded Software Engineer, EW</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Lead Embedded Software Engineer to join our Electromagnetic Warfare (EW) team. As a key member of our team, you&#39;ll work with industry leaders in mechanical, electrical, RF, and FPGA design to deliver the next generation of EW capabilities to our end users. You&#39;ll lead teams of Haskell developers to implement high-performance, high-assurance software systems and participate in EW technology roadmapping, software architecture, and holistic design review processes.</p>
<p>Required qualifications include experience building and leading teams dedicated to the functional programming approach, eligibility to obtain and maintain an active U.S. Top Secret SCI security clearance, and ability to relocate to and work in person in our RF laboratory in Orange County, California.</p>
<p>Preferred qualifications include experience working with typed functional programming languages, experience with MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, and FPGA development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>Haskell, functional programming, team leadership, security clearance, MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, FPGA development, typed functional programming languages, C code generation, OpenGL, DirectX, Vulkan</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops software for electromagnetic warfare and other applications.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5069841007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5daf8f5f-60a</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>Join the Compute Infrastructure team at xAI, responsible for designing, building, and operating massive-scale clusters and orchestration platforms. You will push the boundaries of container orchestration, manage exascale compute resources, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage massive-scale clusters to host, persist, train, and serve AI workloads with extreme reliability and performance.</li>
<li>Design, develop, and extend an in-house container orchestration platform that achieves superior scalability, isolation, resource efficiency, and fault-tolerance compared to off-the-shelf solutions.</li>
<li>Collaborate with research teams to architect and optimize compute clusters specifically for large-scale training runs, inference services, and real-time applications.</li>
<li>Profile, debug, and resolve complex system-level performance bottlenecks, resource contention, scheduling issues, and reliability problems across the full stack.</li>
<li>Own end-to-end infrastructure initiatives with first-principles design, rigorous testing, automation, and continuous optimization to support frontier AI compute demands.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent).</li>
<li>Strong proficiency in systems programming languages such as C/C++ and Rust.</li>
<li>Proven track record of profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering.</li>
<li>Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads.</li>
<li>Proven track record of operating or designing large-scale AI training/inference clusters (GPU/TPU scale).</li>
<li>Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute.</li>
<li>Familiarity with performance tools, tracing, and debugging in production distributed environments.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>virtualization technologies, advanced containerization/sandboxing, systems programming languages, Linux kernel internals, resource management, scheduling, memory management, low-level engineering, Linux kernel development, hypervisor extensions, low-level system programming, custom runtimes, isolation techniques, bespoke platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8e582153-6af</externalid>
      <Title>Senior DevOps Lead - Cloud &amp; Autonomous System</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>
<p>We are a small company with under 100 employees, operating with the energy of a startup. However, we&#39;re also publicly traded, which means our employees get access to the liquidity of our publicly-traded equity.</p>
<p>As a Senior DevOps Lead at Cyngn, you will play a vital role in architecting and managing infrastructure across cloud and autonomous vehicle systems. This position combines traditional cloud DevOps leadership with specialized expertise in robotics and autonomous systems infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Lead and architect cloud and vehicle infrastructure initiatives across AWS and ROS/Linux environments</li>
<li>Design and implement scalable solutions for both cloud services and autonomous vehicle systems</li>
<li>Establish and maintain DevOps best practices, CI/CD pipelines, and infrastructure as code</li>
<li>Drive observability, monitoring, and incident response strategies</li>
<li>Optimize performance and cost efficiency of cloud and edge computing resources</li>
<li>Mentor team members and foster a developer-friendly environment</li>
<li>Manage on-call rotations and incident response processes</li>
<li>Architect solutions for processing and storing large-scale vehicle telemetry data</li>
<li>Lead security initiatives and compliance efforts across infrastructure</li>
</ul>
<p>Requirements</p>
<ul>
<li>10+ years of relevant DevOps/Infrastructure experience</li>
<li>Proven track record as a technical lead in platform or infrastructure teams</li>
<li>Advanced expertise in AWS services, infrastructure as code (Terraform), and Kubernetes</li>
<li>Strong experience with service mesh (Istio) and Helm/Kustomize</li>
<li>Deep understanding of ROS/ROS2 and Linux kernel configurations</li>
<li>Experience with GPU configurations and ML infrastructure</li>
<li>Expertise in ARM and NVIDIA CUDA platform configurations</li>
<li>Strong programming skills in Python and shell scripting</li>
<li>Experience with infrastructure automation (Ansible)</li>
<li>Expertise in CI/CD tools (Jenkins, GitHub Actions)</li>
<li>Strong system architecture and design skills</li>
<li>Excellence in technical documentation</li>
<li>Outstanding problem-solving abilities</li>
<li>Strong leadership and mentoring capabilities</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with autonomous vehicle systems</li>
<li>Track record of optimizing GPU-based ML infrastructure</li>
<li>Experience with large-scale IoT deployments</li>
<li>Contributions to open-source projects</li>
<li>Experience with real-time systems and low-latency requirements</li>
<li>Expertise in security implementations including SSO, IdP, and AWS Cognito</li>
<li>Experience with JFrog Artifactory and container registry management</li>
<li>Proficiency in AWS IoT Greengrass</li>
<li>Experience with container resource management on edge devices</li>
<li>Understanding of CPU affinity and priority scheduling</li>
<li>Track record of implementing cost optimization strategies</li>
<li>Experience with scaling systems both horizontally and vertically</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</li>
<li>Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</li>
<li>Company 401(k)</li>
<li>Commuter Benefits</li>
<li>Flexible vacation policy</li>
<li>Sabbatical leave opportunity after five years with the company</li>
<li>Paid Parental Leave</li>
<li>Daily lunches for in-office employees</li>
<li>Monthly meal and tech allowances for remote employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,000-225,000 per year</Salaryrange>
<Skills>AWS services, infrastructure as code (Terraform), Kubernetes, service mesh (Istio), Helm/Kustomize, ROS/ROS2, Linux kernel configurations, GPU configurations, ML infrastructure, ARM, NVIDIA CUDA platform configurations, Python, shell scripting, infrastructure automation (Ansible), CI/CD tools (Jenkins, GitHub Actions), system architecture and design skills, technical documentation, problem-solving abilities, leadership and mentoring capabilities, autonomous vehicle systems, optimizing GPU-based ML infrastructure, large-scale IoT deployments, open-source projects, real-time systems and low-latency requirements, security implementations including SSO, IdP, and AWS Cognito, JFrog Artifactory and container registry management, AWS IoT Greengrass, container resource management on edge devices, CPU affinity and priority scheduling, cost optimization strategies, scaling systems both horizontally and vertically</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/1c31b7d8-cf85-472f-9358-1e10189cf815</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1b7bdffa-bff</externalid>
      <Title>Senior Embedded Software Engineer, Android Platform</Title>
      <Description><![CDATA[<p>Redefining computing relies on us creating hardware and software that seamlessly merge virtual reality and the real world. To create this illusion, we are designing and developing completely new ways of using cameras, complex imaging pipelines, and computer vision algorithms.</p>
<p>We are looking for a Senior Embedded Software Engineer to develop firmware for our advanced VR/XR products on the Qualcomm and Android platform, working in an embedded environment. You have experience developing high quality software for resource-constrained systems. You have an understanding of electronics, and are able to navigate your way through schematics.</p>
<p>This is a highly technical space, and you will be working with some of the industry’s leading experts. You do not need to have a background in VR/XR, but you will need to show an interest in and aptitude for bridging the gap between customers and deep technology. You can get into the detail of topics, but can also provide clear written and oral synthesis.</p>
<p>Experience building hardware products and/or products that require researching and inventing new technology is preferred. This position is based in Finland. Varjo uses a hybrid work model, and you&#39;ll be able to choose to work either at the office or remotely. To perform effectively in this role, we expect that you will need to visit our Helsinki office whenever necessary to work with prototype hardware.</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Android Platform Bring-up: Port and bring up Android (AOSP) on Qualcomm Snapdragon SoCs, including bootloader configuration, kernel integration, and device tree adaptation for new custom hardware.</li>
<li>Embedded Software Development: Design, develop, and maintain low-level software for advanced embedded platforms, ensuring optimal performance on tightly constrained systems.</li>
<li>Hardware Integration: Collaborate with hardware engineers to validate new boards, debug hardware/software interactions, and perform power/performance optimizations.</li>
<li>Kernel &amp; Driver Development: Customize and integrate Linux kernel drivers, board support packages (BSP), and HAL (Hardware Abstraction Layer) components.</li>
<li>Debugging &amp; Optimization: Use tools such as JTAG, serial console, and Android-specific debugging utilities to trace issues across the boot chain and runtime environment.</li>
<li>Cross-Team Collaboration: Partner with other software teams to bridge the gap between hardware and software, ensuring cohesive system functionality.</li>
<li>Production Support: Support manufacturing and production testing by providing reliable platform-level builds and diagnostics.</li>
</ul>
<p><strong>Our expectations</strong></p>
<ul>
<li>7+ years of software development experience, with 3–5 years focused on Android BSP, platform development, or AOSP bring-up on ARM/Qualcomm SoCs.</li>
<li>Deep understanding of Android system architecture, from bootloader to framework.</li>
<li>Strong experience with Linux kernel, device trees, and Qualcomm toolchains.</li>
<li>Proven capability in debugging at multiple layers: bootloader, kernel, board initialization, and HAL.</li>
<li>Proficiency with C/C++ and scripting (Python or Bash).</li>
<li>Familiarity with Git, CI/CD, and embedded development toolchains.</li>
<li>Experience with Yocto, U-Boot, or other embedded build systems is a plus.</li>
<li>Exposure to XR/VR or multimedia pipelines on Android is advantageous.</li>
</ul>
<p><strong>Next steps</strong></p>
<p>By joining us, you’ll get:</p>
<ul>
<li>Opportunity to take part in creating the new state of the art in virtual and mixed reality experiences.</li>
<li>A low-hierarchy culture with minimal bureaucracy and maximum opportunity for you to take charge of your work.</li>
<li>Flexible working conditions, competitive salary, and great benefits.</li>
<li>The possibility to select the tools and methods you want to use to do your job effectively.</li>
<li>An international working environment with tons of opportunities to learn and grow with the company.</li>
</ul>
<p>As we are developing the next computing paradigm, we need a versatile team to help ensure that the new realities are designed for everyone. Our multicultural team consists of talents from all around the world, and our daily working language is English. We believe in the power of diversity – where different experiences, backgrounds, and ideas drive innovation and results.</p>
<p>Even if your profile is not a perfect match but you want to learn and grow, we’d love to hear from you. Ready to jump into the exciting world of VR/XR? Apply now by including your CV and a link to your LinkedIn profile.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Android BSP, Platform development, AOSP bring-up on ARM/Qualcomm SoCs, Linux kernel, Device trees, Qualcomm toolchains, C/C++, Scripting (Python or Bash), Git, CI/CD, Embedded development toolchains, Yocto, U-Boot, Embedded build systems, XR/VR or multimedia pipelines on Android</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Varjo</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Varjo is a leading provider of enterprise virtual and mixed reality solutions, delivering high levels of immersion, performance, and security for industrial customers globally. The company was founded in 2016 and operates in over 40 countries worldwide.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/C71C4BCA96</Applyto>
      <Location>Helsinki</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>173381a1-8d0</externalid>
      <Title>Software Engineer, Sandboxing (Systems)</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>Responsibilities:</strong></p>
<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimising our virtualisation and VM workloads that power our AI infrastructure. Your expertise in low-level system programming, kernel optimisation, and virtualisation technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>
<ul>
<li>Optimise our virtualisation stack, improving performance, reliability, and efficiency of our VM environments</li>
<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>
<li>Investigate and resolve performance bottlenecks in virtualised environments</li>
<li>Collaborate with cloud engineering teams to optimise interactions between our workloads and underlying hardware</li>
<li>Develop tooling for monitoring and improving virtualisation performance</li>
<li>Work with our ML engineers to understand their computational needs and optimise our systems accordingly</li>
<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>
<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>
<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have experience with Linux kernel development, system programming, or related low-level software engineering</li>
<li>Understand virtualisation technologies (KVM, Xen, QEMU, etc.) and their performance characteristics</li>
<li>Have experience optimising system performance for compute-intensive workloads</li>
<li>Are familiar with modern CPU architectures and memory systems</li>
<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>
<li>Understand Linux resource management, scheduling, and memory management</li>
<li>Have experience profiling and debugging system-level performance issues</li>
<li>Are comfortable diving into unfamiliar codebases and technical domains</li>
<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>
<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>GPU virtualisation and acceleration technologies</li>
<li>Cloud infrastructure at scale (AWS, GCP)</li>
<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>
<li>eBPF programming and kernel tracing tools</li>
<li>OS-level security hardening and isolation techniques</li>
<li>Developing custom scheduling algorithms for specialised workloads</li>
<li>Performance optimisation for ML/AI specific workloads</li>
<li>Network stack optimisation and high-performance networking</li>
<li>Experience with TPUs, custom ASICs, or other ML accelerators</li>
</ul>
<p><strong>Representative projects:</strong></p>
<ul>
<li>Optimising kernel parameters and VM configurations to reduce inference latency for large language models</li>
<li>Implementing custom memory management schemes for large-scale distributed training</li>
<li>Developing specialised I/O schedulers to prioritise ML workloads</li>
<li>Creating lightweight virtualisation solutions tailored for AI inference</li>
<li>Building monitoring and instrumentation tools to identify system-level bottlenecks</li>
<li>Enhancing communication between VMs for distributed training workloads</li>
</ul>
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong></p>
<p>We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong></p>
<p>Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong></p>
<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about the authenticity of an email or a request, please reach out to us directly.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>Linux kernel development, System programming, Low-level software engineering, Virtualisation technologies, Kernel optimisation, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualisation, Cloud infrastructure, Container technologies, eBPF programming, OS-level security hardening, Custom scheduling algorithms, Performance optimisation, Network stack optimisation, TPUs, Custom ASICs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. Its team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025591008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>139cd1f4-231</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>At Anthropic, we are building some of the most complex and large-scale AI infrastructure in the world. As that infrastructure scales rapidly, so does the imperative to optimise how we use it. As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable—without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimisation frameworks that ensure every dollar of our infrastructure investment delivers maximum value. This is a high-impact, cross-functional role at the intersection of systems engineering, financial optimisation, and AI infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilisation, and costs across our cloud and datacentre fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimise their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimise cluster configurations, workload placement, and resource utilisation across AI training and inference workloads—including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimisations across multiple services and platforms to deliver measurable utilisation and performance gains.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>
<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP</li>
<li>Have experience optimising end-to-end performance of distributed systems, including workload right-sizing and resource utilisation tuning</li>
<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues</li>
<li>Have experience designing or working with performance and utilisation monitoring tools in large-scale, distributed environments</li>
<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity</li>
<li>Have excellent communication and collaboration skills: you will work closely with internal and external stakeholders to build consensus and drive projects forward</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with machine learning infrastructure workloads, as well as associated networking technologies like NCCL</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems</li>
<li>Published work in performance optimisation and scaling distributed systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about the authenticity of an email or a request, please reach out to us directly.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, performance optimisation, scalability, continuous improvement, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, systems design tradeoffs, published work in performance optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation building some of the most complex and large-scale AI infrastructure in the world.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>3b20b513-ea1</externalid>
      <Title>Staff+ Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organisation is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><em>Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. Please note we may also consider you for different Infrastructure teams.</em></p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>
<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>
<li>Define infrastructure architecture, ensuring the hardest problems get solved — whether by you directly or by working through others</li>
<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>
<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years of software engineering experience</li>
<li>Have led complex, multi-quarter technical initiatives that span multiple teams or systems</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Have deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>
<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Security and privacy best practice expertise</li>
<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
<li>Technical expertise: quickly understanding systems design tradeoffs and keeping track of rapidly evolving software systems</li>
</ul>
<p><em>Deadline to apply: None. Applications will be reviewed on a rolling basis.</em></p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, Python, Rust, Go, Java, security and privacy best practice expertise, machine learning infrastructure, GPUs, TPUs, Trainium, NCCL, low level systems experience, linux kernel tuning, eBPF</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It is a group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108817008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>886a66bf-10d</externalid>
      <Title>Senior Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organisation is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><em>Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. Please note we may also consider you for different Infrastructure teams.</em></p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead infrastructure projects from design through delivery, owning scope, execution, and outcomes</li>
<li>Build and maintain systems that support AI clusters at massive scale (thousands to hundreds of thousands of machines)</li>
<li>Partner with cloud providers and internal teams to solve compute, networking, and reliability challenges</li>
<li>Tackle difficult technical problems in your domain and proactively fill gaps in tooling, documentation, and processes</li>
<li>Contribute to operational practices including incident response, postmortems, and on-call rotations</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of software engineering experience</li>
<li>Have led technical projects end-to-end over multiple months, including scoping, breaking down work, and driving delivery</li>
<li>Have deep knowledge of distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Solve hard problems independently and know when to pull others in</li>
<li>Help teammates grow through knowledge sharing and thoughtful technical guidance</li>
<li>Communicate clearly in design docs, presentations, and cross-functional discussions</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Security and privacy best practice expertise</li>
<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>
<li>Low level systems experience, for example linux kernel tuning and eBPF</li>
<li>Technical expertise: Quickly understanding systems design tradeoffs, keeping track of rapidly evolving software systems</li>
</ul>
<p><em>Deadline to apply: None. Applications will be reviewed on a rolling basis.</em></p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000 – £325,000</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, Python, Rust, Go, Java, security and privacy best practice expertise, machine learning infrastructure, GPUs, TPUs, Trainium, NCCL, low level systems experience, linux kernel tuning, eBPF</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation developing AI systems that are reliable, interpretable, and steerable. Its mission is to create safe and beneficial AI systems for users and society.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4915842008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>cb9e2dd0-6da</externalid>
      <Title>Linux Kernels Software Lead</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Linux Kernels Software Lead</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$342K – $555K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Scaling team builds and optimizes large-scale infrastructure to enable next-generation AI workloads.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a founding/lead Linux kernel developer to join our Scaling team. In this role, you’ll design and develop Linux kernel components, working at the intersection of hardware and software to unlock performance at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead and bootstrap the development of our Linux kernel stack to support high-performance systems.</li>
</ul>
<ul>
<li>Design and implement kernel drivers, including for functionality related to DMA, PCIe, NICs, and RDMA.</li>
</ul>
<ul>
<li>Drive end-to-end development of system-scale networking, including required kernel and other low-level software.</li>
</ul>
<ul>
<li>Collaborate with vendors to integrate their technologies within our systems.</li>
</ul>
<ul>
<li>Bring up and debug the kernel on new platforms.</li>
</ul>
<ul>
<li>Build userspace software to support integration, testing, diagnostics, and performance validation.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Proven experience leading development within the Linux kernel.</li>
</ul>
<ul>
<li>Deep knowledge of subsystems relevant to high-performance systems: PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, etc.</li>
</ul>
<ul>
<li>Knowledge of subsystems and frameworks related to scale-out networking: ibverbs, ECN/DCQCN, etc.</li>
</ul>
<ul>
<li>Strong programming skills in C, C++, Python, and Linux shell scripting; Rust experience is a strong plus.</li>
</ul>
<ul>
<li>Experience working directly with engineering teams to define interfaces and tooling.</li>
</ul>
<ul>
<li>Track record of managing vendor deliverables and technical relationships.</li>
</ul>
<ul>
<li>Background in embedded systems development (bootloaders, drivers, hardware/software integration).</li>
</ul>
<ul>
<li>Ability to thrive in ambiguity and build systems from scratch.</li>
</ul>
<p><em>To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.</em></p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$342K – $555K • Offers Equity</Salaryrange>
      <Skills>Linux kernel development, C, C++, Python, Linux shell scripting, Rust, PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, ibverbs, ECN/DCQCN, Embedded systems development, Bootloaders, Drivers, Hardware/software integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e5691162-4e45-4dc6-a6bf-64f60ebf1ac4</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a6f2cc66-67b</externalid>
      <Title>Networking Operating System Firmware Engineer</Title>
      <Description><![CDATA[<p><strong>Networking Operating System Firmware Engineer</strong></p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>
<p><strong>About the Role</strong></p>
<p>We’re seeking a Networking Operating System Firmware Engineer to help bootstrap and scale the switching layer of our AI supercomputers. In this role, you’ll build and maintain custom SONiC NOS images from scratch, working across the Linux kernel, switch ASIC SAI/SDKs, platform drivers, control-plane services, and orchestration layers.</p>
<p>You will validate, configure, and optimize switch platforms used across our high-bandwidth cluster fabric, ensuring performance, reliability, availability, and seamless integration with fleet automation. You’ll collaborate with hardware and systems teams and guide vendors to meet stringent technical expectations.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, develop, and maintain custom SONiC NOS images for large-scale bleeding-edge AI fabrics.</li>
</ul>
<ul>
<li>Integrate and configure Linux kernel components, device drivers, switch ASIC SDKs, and SAI layers.</li>
</ul>
<ul>
<li>Bring up new switch platforms (thermal/fan control, power monitoring, transceiver management, watchdogs, OSFP CMIS, LEDs, CPLDs, etc.).</li>
</ul>
<ul>
<li>Extend and customize SONiC services for routing, telemetry, control-plane state, and distributed automation.</li>
</ul>
<ul>
<li>Work with hardware teams to validate ASIC configurations, link bring-up, SerDes tuning, buffer profiles, and performance baselines.</li>
</ul>
<ul>
<li>Evaluate switch silicon SDK releases, track vendor deliverables, and define platform requirements with vendors and ASIC partners.</li>
</ul>
<ul>
<li>Debug complex issues spanning kernel, platform drivers, SONiC dockers, routing agents, orchestration services, hardware signals, and network topology.</li>
</ul>
<ul>
<li>Integrate switches into fleet-wide monitoring, remote diagnostics, telemetry pipelines, and automated lifecycle workflows.</li>
</ul>
<ul>
<li>Develop robust CI/build pipelines for reproducible NOS builds and controlled rollout across the fleet.</li>
</ul>
<ul>
<li>Support factory bring-up and qualification all the way through mass deployment.</li>
</ul>
<ul>
<li>Collaborate, architect, implement, and deploy novel networking protocols and technologies to achieve maximum performance and reliability at AI factory scale.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have proven experience working with SONiC or comparable NOS stacks (FBOSS, Cumulus Linux, Arista EOS, Junos PFE-level integration, etc.).</li>
</ul>
<ul>
<li>Have experience updating OpenConfig gNMI interfaces and YANG data models.</li>
</ul>
<ul>
<li>Have a strong background in the Linux kernel, network device drivers, and low-level OS internals.</li>
</ul>
<ul>
<li>Have experience integrating Broadcom / Marvell / NVIDIA / Intel ASIC SDKs and SAI implementations.</li>
</ul>
<ul>
<li>Are proficient in C, C++, and Python; familiarity with Rust or Go is a plus.</li>
</ul>
<ul>
<li>Have a deep understanding of L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, and telemetry.</li>
</ul>
<ul>
<li>Have hands-on experience with hardware platform bring-up and board-level debugging.</li>
</ul>
<ul>
<li>Are familiar with CI/CD pipelines, distributed config/state management, and large-scale automation.</li>
</ul>
<ul>
<li>Excel at cross-functional problem solving in high-performance, distributed environments.</li>
</ul>
<ul>
<li>Can lead a team to deliver a project end to end.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$266K – $445K</Salaryrange>
      <Skills>SONiC, Linux kernel, network device drivers, low-level OS internals, C, C++, Python, Rust/Go, L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, telemetry, OpenConfig gNMI interfaces, YANG data models, Broadcom / Marvell / NVIDIA / Intel ASIC SDKs, SAI implementations, CI/CD pipelines, distributed config/state management, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all. It is a privately held company with a large team of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/582b878e-61bf-4be2-8b30-623434baf726</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1ef31769-74d</externalid>
      <Title>Software Engineer, Fleet Management</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Fleet Management</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Role</strong></p>
<p>The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build systems to manage both cloud and bare-metal fleets at scale.</li>
</ul>
<ul>
<li>Develop tools that integrate low-level hardware metrics with high-level job scheduling and cluster management algorithms.</li>
</ul>
<ul>
<li>Leverage LLMs to coordinate vendor operations and optimize infrastructure workflows.</li>
</ul>
<ul>
<li>Automate infrastructure processes, reducing repetitive toil and improving system reliability.</li>
</ul>
<ul>
<li>Collaborate with hardware, infrastructure, and research teams to ensure seamless integration across the stack.</li>
</ul>
<ul>
<li>Continuously improve tools, automation, processes, and documentation to enhance operational efficiency.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong software engineering skills with experience in large-scale infrastructure environments.</li>
</ul>
<ul>
<li>Possess broad knowledge of cluster-level systems (e.g., Kubernetes, CI/CD pipelines, Terraform, cloud providers).</li>
</ul>
<ul>
<li>Have deep expertise in server-level systems (e.g., systemd, containerization, Chef, Linux kernels, firmware management, host routing).</li>
</ul>
<ul>
<li>Are passionate about optimizing the performance and reliability of large compute fleets.</li>
</ul>
<ul>
<li>Thrive in dynamic environments and are eager to solve complex infrastructure challenges.</li>
</ul>
<ul>
<li>Value automation, efficiency, and continuous improvement in everything you build.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $490K</Salaryrange>
      <Skills>software engineering, large-scale infrastructure environments, cluster-level systems, server-level systems, LLMs, infrastructure workflows, automation, operational efficiency, Kubernetes, CI/CD pipelines, Terraform, cloud providers, Chef, Linux kernels, firmware management, host routing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/7809102e-e82a-4678-bf7c-221de8acc0d6</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>37a117ac-7f2</externalid>
      <Title>Embedded SWE, Consumer Devices</Title>
      <Description><![CDATA[<p><strong>Embedded SWE, Consumer Devices</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The <strong>Software Engineering</strong> <strong>Embedded</strong> team builds reliable, high-performance systems on custom hardware. We work closely with hardware engineers to design, optimize, and ship software that bridges cutting-edge devices and real-world constraints like memory, power, and latency. Our work spans early prototyping through product launch, ensuring that our embedded platforms are robust, efficient, and production-ready.</p>
<p><strong>About the Role</strong></p>
<p>As an <strong>Embedded Software Engineer</strong>, you will design, implement, and debug software for embedded devices. You’ll own low-level bring-up, write production C/C++ code, and partner closely with hardware teams to deliver reliable, high-performance systems.</p>
<p>We’re looking for engineers with deep embedded expertise, strong debugging skills, and a passion for building systems that perform under real-world conditions.</p>
<p>This role is based in <strong>San Francisco, CA</strong>. We use a <strong>hybrid work model</strong> of four days in the office per week and offer <strong>relocation assistance</strong> to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, implement, and debug software for embedded devices.</li>
</ul>
<ul>
<li>Contribute to defining software requirements, interfaces, and test plans.</li>
</ul>
<ul>
<li>Bring up and debug new boards.</li>
</ul>
<ul>
<li>Analyze performance, memory, and power profiles and implement optimizations.</li>
</ul>
<ul>
<li>Investigate field issues, perform root-cause analysis, and deliver robust fixes.</li>
</ul>
<ul>
<li>Foster good software engineering practices.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have deep experience shipping embedded systems (10+ years).</li>
</ul>
<ul>
<li>Are proficient in C and C++.</li>
</ul>
<ul>
<li>Are familiar with embedded toolchains, operating systems, and debugging tools.</li>
</ul>
<ul>
<li>Have experience with both rapid prototyping and scalable product development.</li>
</ul>
<ul>
<li>(Nice to have) Have experience with Zephyr RTOS.</li>
</ul>
<ul>
<li>(Nice to have) Have worked with networking/wireless stacks (BLE, Wi-Fi).</li>
</ul>
<ul>
<li>(Nice to have) Have experience with robotic system bring-up or Linux kernel development.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $325K • Offers Equity</Salaryrange>
      <Skills>C, C++, Embedded toolchains, Operating systems, Debugging tools, Rapid prototyping, Scalable product development, Zephyr RTOS, Networking/wireless stacks, Robotic system bring-up, Linux kernel development, Embedded expertise, Strong debugging skills, Passion for building systems that perform under real-world conditions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a company that pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2710d0c7-8f1c-4e1a-bf7a-4000fc5a8d68</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>356892b1-542</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer at its Suzhou office. This role sits at the heart of the company&#39;s AI infrastructure, turning low-level GPU expertise into the performance that powers large-scale AI models for a company that&#39;s revolutionising AI technology. You&#39;ll work closely with research and engineering teams to shape the inference stack from the ground up.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking an expert Senior GPU Engineer to join our AI Infrastructure team. In this role, you will architect and optimize the core inference engine that powers our large-scale AI models. You will be responsible for pushing the boundaries of hardware performance, reducing latency, and maximizing throughput for Generative AI and Deep Learning workloads. You will work at the intersection of Deep Learning algorithms and low-level hardware, designing custom operators and building a highly efficient training/inference execution engine from the ground up.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Custom Operator Development: Design and implement highly optimized GPU kernels (CUDA/Triton) for critical deep learning operations (e.g., FlashAttention, GEMM, LayerNorm) to outperform standard libraries.</li>
<li>Inference Engine Architecture: Contribute to the development of our high-performance inference engine, focusing on graph optimizations, operator fusion, and dynamic memory management (e.g., KV Cache optimization).</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Expertise in the CUDA programming model and NVIDIA GPU architectures (specifically Ampere/Hopper).</li>
<li>Deep understanding of the memory hierarchy (Shared Memory, L2 cache, Registers), warp-level primitives, occupancy optimization, and bank conflict resolution.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Proven ability to navigate and modify complex, large-scale codebases (e.g., PyTorch internals, Linux kernel).</li>
</ul>
<p><strong>Additional Information</strong></p>
<ul>
<li>Starting January 26, 2026, Microsoft AI employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>
<li>Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or protected veteran status.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, CUDA, NVIDIA GPU architectures, Deep Learning algorithms, low-level hardware, PyTorch, Linux kernel</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-18/</Applyto>
      <Location>Suzhou</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>