{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/kernel-development"},"x-facet":{"type":"skill","slug":"kernel-development","display":"Kernel Development","count":12},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_24176cb8-311"},"title":"Member of Technical Staff - Compute Infrastructure","description":"<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. 
As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>\n<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>\n<p>Responsibilities include building and managing massive-scale clusters, designing, developing, and extending an in-house container orchestration platform, collaborating with research teams to architect and optimize compute clusters, profiling, debugging, and resolving complex system-level performance bottlenecks, and owning end-to-end infrastructure initiatives.</p>\n<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and proven track record profiling, debugging, and optimizing complex system-level performance issues.</p>\n<p>Preferred skills and experience include experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, operating or designing large-scale AI training/inference clusters, and familiarity with performance tools, tracing, and debugging in production distributed environments.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_24176cb8-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052040007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent)","Strong proficiency in systems programming languages such as C/C++ and Rust","Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering","Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale"],"x-skills-preferred":["Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads","Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale)","Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute","Familiarity with performance tools, tracing, and debugging in production distributed environments"],"datePosted":"2026-04-18T15:55:50.213Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as 
C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d4292d1-227"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. 
In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>\n<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<p>Responsibilities:</p>\n<p>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</p>\n<p>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</p>\n<p>Investigate and resolve performance bottlenecks in virtualized environments</p>\n<p>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</p>\n<p>Develop tooling for monitoring and improving virtualization performance</p>\n<p>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</p>\n<p>Contribute to the design and implementation of our next-generation compute infrastructure</p>\n<p>Share knowledge with team members on low-level systems programming and Linux kernel internals</p>\n<p>Partner with cloud providers to influence hardware and platform features for AI workloads</p>\n<p>You may be a good fit if you:</p>\n<p>Have experience with Linux kernel development, system programming, or related low-level software engineering</p>\n<p>Understand virtualization technologies (KVM, Xen, QEMU, etc.) 
and their performance characteristics</p>\n<p>Have experience optimizing system performance for compute-intensive workloads</p>\n<p>Are familiar with modern CPU architectures and memory systems</p>\n<p>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</p>\n<p>Understand Linux resource management, scheduling, and memory management</p>\n<p>Have experience profiling and debugging system-level performance issues</p>\n<p>Are comfortable diving into unfamiliar codebases and technical domains</p>\n<p>Are results-oriented, with a bias towards practical solutions and measurable impact</p>\n<p>Care about the societal impacts of AI and are passionate about building safe, reliable systems</p>\n<p>Strong candidates may also have experience with:</p>\n<p>GPU virtualization and acceleration technologies</p>\n<p>Cloud infrastructure at scale (AWS, GCP)</p>\n<p>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</p>\n<p>eBPF programming and kernel tracing tools</p>\n<p>OS-level security hardening and isolation techniques</p>\n<p>Developing custom scheduling algorithms for specialized workloads</p>\n<p>Performance optimization for ML/AI specific workloads</p>\n<p>Network stack optimization and high-performance networking</p>\n<p>Experience with TPUs, custom ASICs, or other ML accelerators</p>\n<p>Representative projects:</p>\n<p>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</p>\n<p>Implementing custom memory management schemes for large-scale distributed training</p>\n<p>Developing specialized I/O schedulers to prioritize ML workloads</p>\n<p>Creating lightweight virtualization solutions tailored for AI inference</p>\n<p>Building monitoring and instrumentation tools to identify system-level bottlenecks</p>\n<p>Enhancing communication between VMs for distributed training workloads</p>
","url":"https://yubhub.co/jobs/job_6d4292d1-227","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Linux kernel development","System programming","Virtualization technologies","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualization","Cloud infrastructure","Container technologies","eBPF programming","Kernel tracing tools","OS-level security hardening","Custom scheduling algorithms","Performance optimization for ML/AI","Network stack optimization"],"datePosted":"2026-04-18T15:55:40.026Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5daf8f5f-60a"},"title":"Member of Technical Staff - Compute Infrastructure","description":"<p>Join the 
Compute Infrastructure team at xAI, responsible for designing, building, and operating massive-scale clusters and orchestration platforms. You will push the boundaries of container orchestration, manage exascale compute resources, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and manage massive-scale clusters to host, persist, train, and serve AI workloads with extreme reliability and performance.</li>\n<li>Design, develop, and extend an in-house container orchestration platform that achieves superior scalability, isolation, resource efficiency, and fault-tolerance compared to off-the-shelf solutions.</li>\n<li>Collaborate with research teams to architect and optimize compute clusters specifically for large-scale training runs, inference services, and real-time applications.</li>\n<li>Profile, debug, and resolve complex system-level performance bottlenecks, resource contention, scheduling issues, and reliability problems across the full stack.</li>\n<li>Own end-to-end infrastructure initiatives with first-principles design, rigorous testing, automation, and continuous optimization to support frontier AI compute demands.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent).</li>\n<li>Strong proficiency in systems programming languages such as C/C++ and Rust.</li>\n<li>Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering.</li>\n<li>Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale.</li>\n</ul>\n<p>Preferred Skills and 
Experience:</p>\n<ul>\n<li>Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads.</li>\n<li>Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale).</li>\n<li>Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute.</li>\n<li>Familiarity with performance tools, tracing, and debugging in production distributed environments.</li>\n</ul>\n<p>Compensation and Benefits:</p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_5daf8f5f-60a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052040007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["virtualization technologies","advanced containerization/sandboxing","systems programming languages","Linux kernel internals","resource management","scheduling","memory management","low-level engineering"],"x-skills-preferred":["Linux kernel development","hypervisor extensions","low-level system programming","custom runtimes","isolation techniques","bespoke platforms"],"datePosted":"2026-04-18T15:39:56.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"virtualization technologies, advanced containerization/sandboxing, systems programming languages, Linux kernel internals, resource management, scheduling, memory management, low-level engineering, Linux kernel development, hypervisor extensions, low-level system programming, custom runtimes, isolation techniques, bespoke platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d9d8eb9-e9d"},"title":"Senior Software Engineer, Embedded Applications","description":"<p>The VBAT Software team at Shield AI is seeking a Senior Software Engineer to develop complex avionics software for cutting-edge Unmanned Aerial Vehicles (UAV). As a member of the team, you will develop and maintain software architectures, generate and maintain software requirements, document and present software designs, coordinate software development, and marshal the entire suite of VBAT software through test and verification, release, and deployment to production and customers.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing high-quality C/C++ code tailored specifically for V-Bat aircraft, ensuring optimal performance, reliability, and safety.</li>\n<li>Participating in architecture, design, and code reviews</li>\n<li>Leading cross-functional teams to create systems of software features to implement advanced robotic avionics capabilities</li>\n<li>Integrating software from multiple departments to include firmware, software test and verification, Autonomy AI, and Ground Control Stations (GCS)</li>\n<li>Developing software systems to implement and integrate interfaces to modern avionics sensors, sub-systems, and payloads</li>\n<li>Facilitating the design process 
for updates to the software system architecture</li>\n<li>Using modern software development tools and processes to capture our existing architecture and design future architectures</li>\n<li>Collaborating to define and extend systems engineering processes</li>\n<li>Reporting status, risks, accomplishments, expectations to senior leadership</li>\n<li>Working with the V-Bat production teams to manufacture UAVs in-house.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Demonstrated track record of assuming ownership over development processes and features and delivering outstanding outcomes</li>\n<li>Proven track record of successfully shipping products, showcasing the ability to navigate through development cycles, overcome obstacles, and deliver high-quality solutions to meet project deadlines and exceed client expectations in a fast-paced environment</li>\n<li>Proactively identifying opportunities for improvement within software development projects, demonstrating initiative to propose and implement innovative solutions that enhance efficiency, quality, and overall project success and V-Bat reliability</li>\n<li>B.S., M.S., PhD degree in Systems Engineering, Software Engineering, Computer Science or STEM (Science, Technology, Engineering, or Mathematics) discipline, such as Aerospace, Mechanical, or Electrical Engineering</li>\n<li>Strong embedded software development experience in C/C++</li>\n<li>Strong knowledge of embedded software, kernel development, BSPs or other systems software components</li>\n<li>Good understanding of computer architecture, operating systems, and network protocols fundamentals</li>\n<li>Experience producing high-quality technical documentation, including architecture, detailed designs, and test plans</li>\n</ul>
","url":"https://yubhub.co/jobs/job_1d9d8eb9-e9d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/6bb0bc83-a790-4633-b872-ca062ed9d1e7","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,410 - $249,616 a year","x-skills-required":["C/C++","Embedded software development","Kernel development","BSPs","Computer architecture","Operating systems","Network protocols","Technical documentation"],"x-skills-preferred":["Real Time Operating System (RTOS)","Autonomous robotic systems","Fast-paced environments","Startup or R&D settings"],"datePosted":"2026-04-17T13:05:20.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, Embedded software development, Kernel development, BSPs, Computer architecture, Operating systems, Network protocols, Technical documentation, Real Time Operating System (RTOS), Autonomous robotic systems, Fast-paced environments, Startup or R&D settings","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166410,"maxValue":249616,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2bc207d0-89b"},"title":"Senior Machine Learning Engineer","description":"<p>We are seeking a Senior Machine Learning Research Engineer to join the Machine Learning Science (MLS) team, within the Computational Science department. 
The ideal candidate has a strong knowledge in designing and building deep learning (DL) pipelines, and expertise in creating reliable, scalable artificial intelligence/machine learning (AI/ML) systems in a cloud environment.</p>\n<p>The MLS team at Freenome develops DL models using massive-scale genomic data that presents significant challenges for current training paradigms. The Senior Machine Learning Research Engineer will primarily be responsible for developing and deploying the infrastructure needed to support development of such DL models: enabling distributed DL pipelines, optimising hardware utilisation for efficient training, and performing model optimisations.</p>\n<p>As part of an interdisciplinary R&amp;D team, they will work in close collaboration with machine learning scientists, computational biologists and software engineers to accelerate the development of state-of-the-art ML/AI models and help Freenome achieve its mission.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Implementing and refining DL pipelines on distributed computing platforms to enhance the speed and efficiency of DL operations, including model training, data handling, model management, and inference.</li>\n<li>Collaborating closely with ML scientists and software engineers to understand current challenges and requirements and ensure that the DL model development pipelines created are perfectly aligned with scientific goals and operational needs.</li>\n<li>Continuously monitoring, evaluating, and optimising DL model training pipelines for performance and scalability.</li>\n<li>Staying up to date with the latest advancements in AI, ML, and related technologies, and quickly learning and adapting new tools and frameworks, if necessary.</li>\n<li>Developing and maintaining robust and reproducible DL pipelines that guarantee that DL pipelines can be reliably executed, maintaining consistency and accuracy of results.</li>\n<li>Driving performance improvements across our stack 
through profiling, optimisation, and benchmarking. Implementing efficient caching solutions and debugging distributed systems to accelerate both training and evaluation pipelines.</li>\n<li>Acting as a bridge facilitating communication between the engineering and scientific teams, documenting and sharing best practices to foster a culture of learning and continuous improvement.</li>\n</ul>\n<p>Must-haves include:</p>\n<ul>\n<li>MS or equivalent experience in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Software Engineering, with an emphasis on AI/ML theory and/or practical development.</li>\n<li>5+ years of post-MS industry experience working on developing AI/ML software engineering pipelines.</li>\n<li>Proficiency in a general-purpose programming language: Python (preferred), Java, Julia, C, C++, etc.</li>\n<li>Strong knowledge of ML and DL fundamentals and hands-on experience with machine learning frameworks such as PyTorch, TensorFlow, Jax or Scikit-learn.</li>\n<li>In-depth knowledge of scalable and distributed computing platforms that support complex model training (such as Ray or DeepSpeed) and their integration with ML developer tools like TensorBoard, Wandb, or MLflow.</li>\n<li>Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and how to deploy and manage AI/ML models and pipelines in a cloud environment.</li>\n<li>Understanding of containerisation technologies (e.g., Docker) and computing resource orchestration tools (e.g., Kubernetes) for deploying scalable ML/AI solutions.</li>\n<li>Proven track record of developing and optimising workflows for training DL models, large language models (LLMs), or similar for problems with high data complexity and volume.</li>\n<li>Experience managing large datasets, including data storage (such as HDFS or Parquet on S3), retrieval, and efficient data processing techniques (via libraries and executors such as PyArrow and Spark).</li>\n<li>Proficiency in version control 
systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices to maintain code quality and automate development workflows.</li>\n<li>Expertise in building and launching large-scale ML frameworks in a scientific environment that supports the needs of a research team.</li>\n<li>Excellent ability to work effectively with cross-functional teams and communicate across disciplines.</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Experience working with large-scale genomics or biological datasets.</li>\n<li>Experience managing multimodal datasets, such as combinations of sequence, text, image, and other data.</li>\n<li>Experience GPU/Accelerator programming and kernel development (such as CUDA, Triton or XLA).</li>\n<li>Experience with infrastructure-as-code and configuration management.</li>\n<li>Experience cultivating MLOps and ML infrastructure best practices, especially around reliability, provisioning and monitoring.</li>\n<li>Strong track record of contributions to relevant DL projects, e.g. on github.</li>\n</ul>\n<p>The US target range of our base salary for new hires is $161,925 - $227,325. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered.</p>\n<p>Freenome is proud to be an equal-opportunity employer, and we value diversity. 
Freenome does not discriminate on the basis of race, colour, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</p>","url":"https://yubhub.co/jobs/job_2bc207d0-89b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Freenome","sameAs":"https://freenome.com/","logo":"https://logos.yubhub.co/freenome.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/freenome/jobs/8013673002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$161,925 - $227,325","x-skills-required":["Python","Java","Julia","C","C++","PyTorch","TensorFlow","Jax","Scikit-learn","Ray","DeepSpeed","TensorBoard","Wandb","MLflow","AWS","Google Cloud","Azure","Docker","Kubernetes","Git","Continuous Integration/Continuous Deployment"],"x-skills-preferred":["Large-scale genomics or biological datasets","Multimodal datasets","GPU/Accelerator programming and kernel development","Infrastructure-as-code and configuration management","MLOps and ML infrastructure best practices"],"datePosted":"2026-04-17T12:35:01.240Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brisbane, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Julia, C, C++, PyTorch, TensorFlow, Jax, Scikit-learn, Ray, DeepSpeed, TensorBoard, Wandb, MLflow, AWS, Google Cloud, Azure, Docker, Kubernetes, Git, Continuous Integration/Continuous Deployment, Large-scale genomics or biological datasets, Multimodal datasets, GPU/Accelerator programming and kernel development, Infrastructure-as-code and 
configuration management, MLOps and ML infrastructure best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":161925,"maxValue":227325,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_173381a1-8d0"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimising our virtualisation and VM workloads that power our AI infrastructure. 
Your expertise in low-level system programming, kernel optimisation, and virtualisation technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<ul>\n<li>Optimise our virtualisation stack, improving performance, reliability, and efficiency of our VM environments</li>\n<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>\n<li>Investigate and resolve performance bottlenecks in virtualised environments</li>\n<li>Collaborate with cloud engineering teams to optimise interactions between our workloads and underlying hardware</li>\n<li>Develop tooling for monitoring and improving virtualisation performance</li>\n<li>Work with our ML engineers to understand their computational needs and optimise our systems accordingly</li>\n<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>\n<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>\n<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have experience with Linux kernel development, system programming, or related low-level software engineering</li>\n<li>Understand virtualisation technologies (KVM, Xen, QEMU, etc.) 
and their performance characteristics</li>\n<li>Have experience optimising system performance for compute-intensive workloads</li>\n<li>Are familiar with modern CPU architectures and memory systems</li>\n<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>\n<li>Understand Linux resource management, scheduling, and memory management</li>\n<li>Have experience profiling and debugging system-level performance issues</li>\n<li>Are comfortable diving into unfamiliar codebases and technical domains</li>\n<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>\n<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU virtualisation and acceleration technologies</li>\n<li>Cloud infrastructure at scale (AWS, GCP)</li>\n<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>\n<li>eBPF programming and kernel tracing tools</li>\n<li>OS-level security hardening and isolation techniques</li>\n<li>Developing custom scheduling algorithms for specialised workloads</li>\n<li>Performance optimisation for ML/AI specific workloads</li>\n<li>Network stack optimisation and high-performance networking</li>\n<li>Experience with TPUs, custom ASICs, or other ML accelerators</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Optimising kernel parameters and VM configurations to reduce inference latency for large language models</li>\n<li>Implementing custom memory management schemes for large-scale distributed training</li>\n<li>Developing specialised I/O schedulers to prioritise ML workloads</li>\n<li>Creating lightweight virtualisation solutions tailored for AI inference</li>\n<li>Building monitoring and instrumentation tools to identify system-level bottlenecks</li>\n<li>Enhancing communication 
between VMs for distributed training workloads</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong></p>\n<p>We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong></p>\n<p>Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about the authenticity of an email or a request, please reach out to us directly.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_173381a1-8d0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000 - $405,000 USD","x-skills-required":["Linux kernel development","System programming","Low-level software engineering","Virtualisation technologies","Kernel optimisation","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualisation","Cloud infrastructure","Container technologies","eBPF programming","OS-level security hardening","Custom scheduling algorithms","Performance optimisation","Network stack optimisation","TPUs","Custom ASICs"],"datePosted":"2026-03-08T14:03:08.579Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Low-level software engineering, Virtualisation technologies, Kernel optimisation, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualisation, Cloud infrastructure, Container technologies, eBPF programming, OS-level security hardening, Custom scheduling algorithms, Performance optimisation, Network stack optimisation, TPUs, Custom 
ASICs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11a60d5a-f54"},"title":"Performance Engineer, GPU","description":"<p><strong>About the role:</strong></p>\n<p>Pioneering the next generation of AI requires breakthrough innovations in GPU performance and systems engineering. As a GPU Performance Engineer, you&#39;ll architect and implement the foundational systems that power Claude and push the frontiers of what&#39;s possible with large language models. You&#39;ll be responsible for maximizing GPU utilization and performance at unprecedented scale, developing cutting-edge optimizations that directly enable new model capabilities and dramatically improve inference efficiency.</p>\n<p>Working at the intersection of hardware and software, you&#39;ll implement state-of-the-art techniques from custom kernel development to distributed system architectures. 
Your work will span the entire stack—from low-level tensor core optimizations to orchestrating thousands of GPUs in perfect synchronization.</p>\n<p>Strong candidates will have a track record of delivering transformative GPU performance improvements in production ML systems and will be excited to shape the future of AI infrastructure alongside world-class researchers and engineers.</p>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have deep experience with GPU programming and optimization at scale</li>\n<li>Are impact-driven, passionate about delivering measurable performance breakthroughs</li>\n<li>Can navigate complex systems from hardware interfaces to high-level ML frameworks</li>\n<li>Enjoy collaborative problem-solving and pair programming</li>\n<li>Want to work on state-of-the-art language models with real-world impact</li>\n<li>Care about the societal impacts of your work</li>\n<li>Thrive in ambiguous environments where you define the path forward</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU Kernel Development: CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization</li>\n<li>ML Compilers &amp; Frameworks: PyTorch/JAX internals, torch.compile, XLA, custom operators</li>\n<li>Performance Engineering: Kernel fusion, memory bandwidth optimization, profiling with Nsight</li>\n<li>Distributed Systems: NCCL, NVLink, collective communication, model parallelism</li>\n<li>Low-Precision: INT8/FP8 quantization, mixed-precision techniques</li>\n<li>Production Systems: Large-scale training infrastructure, fault tolerance, cluster orchestration</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Co-design attention mechanisms and algorithms for next-generation hardware architectures</li>\n<li>Develop custom kernels for emerging quantization formats and mixed-precision techniques</li>\n<li>Design distributed communication strategies for multi-node GPU 
clusters</li>\n<li>Optimize end-to-end training and inference pipelines for frontier language models</li>\n<li>Build performance modeling frameworks to predict and optimize GPU utilization</li>\n<li>Implement kernel fusion strategies to minimize memory bandwidth bottlenecks</li>\n<li>Create resilient systems for planet-scale distributed training infrastructure</li>\n<li>Profile and eliminate performance bottlenecks in production serving infrastructure</li>\n<li>Partner with hardware vendors to influence future accelerator capabilities and software stacks</li>\n</ul>\n<p><strong>Deadline to apply:</strong> None. Applications will be reviewed on a rolling basis.</p>\n<p>The expected salary range for this position is:</p>\n<p>Annual Salary: $280,000 - $850,000 USD</p>","url":"https://yubhub.co/jobs/job_11a60d5a-f54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4926227008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000 - $850,000 USD","x-skills-required":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"x-skills-preferred":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"datePosted":"2026-03-08T13:45:05.412Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU programming, optimization at scale, custom kernel development, distributed system architectures, low-level tensor core optimizations, orchestrating thousands of GPUs, GPU kernel development, CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization, ML compilers & frameworks, PyTorch/JAX internals, torch.compile, XLA, custom operators, performance engineering, kernel fusion, memory bandwidth optimization, profiling with Nsight, distributed systems, NCCL, NVLink, collective communication, model parallelism, low-precision, INT8/FP8 quantization, mixed-precision techniques, production systems, large-scale training infrastructure, fault tolerance, cluster orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb9e2dd0-6da"},"title":"Linux Kernels Software Lead","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Linux Kernels Software Lead</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$342K – $555K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws.
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Scaling team builds and optimizes large-scale infrastructure to 
enable next-generation AI workloads.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a founding/lead Linux kernel developer to join our Scaling team. In this role, you’ll design and develop Linux kernel components, working at the intersection of hardware and software to unlock performance at scale.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead and bootstrap the development of our Linux kernel stack to support high-performance systems.</li>\n</ul>\n<ul>\n<li>Design and implement kernel drivers, including for functionality related to DMA, PCIe, NICs, and RDMA.</li>\n</ul>\n<ul>\n<li>Drive end-to-end development of system-scale networking, including required kernel and other low-level software.</li>\n</ul>\n<ul>\n<li>Collaborate with vendors to integrate their technologies within our systems.</li>\n</ul>\n<ul>\n<li>Bring up and debug the kernel on new platforms.</li>\n</ul>\n<ul>\n<li>Build userspace software to support integration, testing, diagnostics, and performance validation.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Proven experience leading development within the Linux kernel.</li>\n</ul>\n<ul>\n<li>Deep knowledge of subsystems relevant to high-performance systems: PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, etc.</li>\n</ul>\n<ul>\n<li>Knowledge of subsystems and frameworks related to scale-out networking: ibverbs, ECN/DCQCN, etc.</li>\n</ul>\n<ul>\n<li>Strong programming skills in C, C++, Python, and Linux shell scripting; Rust experience is a strong plus.</li>\n</ul>\n<ul>\n<li>Experience working directly with engineering teams to define interfaces and tooling.</li>\n</ul>\n<ul>\n<li>Track record of managing vendor deliverables and technical relationships.</li>\n</ul>\n<ul>\n<li>Background in embedded systems development (bootloaders, drivers, hardware/software integration).</li>\n</ul>\n<ul>\n<li>Ability to thrive in ambiguity and build systems from scratch.</li>\n</ul>\n<p><em>To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.</em></p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_cb9e2dd0-6da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e5691162-4e45-4dc6-a6bf-64f60ebf1ac4","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$342K – $555K • Offers Equity","x-skills-required":["Linux kernel development","C","C++","Python","Linux shell scripting","Rust","PCIe","dma-buf","RDMA","P2P","SR-IOV","IOMMU","ibverbs","ECN/DCQCN"],"x-skills-preferred":["Embedded systems development","Bootloaders","Drivers","Hardware/software integration"],"datePosted":"2026-03-06T18:36:41.086Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, C, C++, Python, Linux shell scripting, Rust, PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, ibverbs, ECN/DCQCN, Embedded systems development, Bootloaders, Drivers,
Hardware/software integration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":342000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_37a117ac-7f2"},"title":"Embedded SWE, Consumer Devices","description":"<p><strong>Embedded SWE, Consumer Devices</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Consumer Products</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 
hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The <strong>Software Engineering</strong> <strong>Embedded</strong> team builds reliable, high-performance systems on custom hardware. We work closely with hardware engineers to design, optimize, and ship software that bridges cutting-edge devices and real-world constraints like memory, power, and latency. Our work spans early prototyping through product launch, ensuring that our embedded platforms are robust, efficient, and production-ready.</p>\n<p><strong>About the Role</strong></p>\n<p>As an <strong>Embedded Software Engineer</strong>, you will design, implement, and debug software for embedded devices. You’ll own low-level bring-up, write production C/C++ code, and partner closely with hardware teams to deliver reliable, high-performance systems.</p>\n<p>We’re looking for engineers with deep embedded expertise, strong debugging skills, and a passion for building systems that perform under real-world conditions.</p>\n<p>This role is based in <strong>San Francisco, CA</strong>. 
We use a <strong>hybrid work model</strong> of four days in the office per week and offer <strong>relocation assistance</strong> to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, implement, and debug software for embedded devices.</li>\n</ul>\n<ul>\n<li>Contribute to defining software requirements, interfaces, and test plans.</li>\n</ul>\n<ul>\n<li>Bring up and debug new boards.</li>\n</ul>\n<ul>\n<li>Analyze performance, memory, and power profiles and implement optimizations.</li>\n</ul>\n<ul>\n<li>Investigate field issues, perform root-cause analysis, and deliver robust fixes.</li>\n</ul>\n<ul>\n<li>Foster good software engineering practices.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep experience shipping embedded systems (typically 10+ years).</li>\n</ul>\n<ul>\n<li>Are proficient in C and C++.</li>\n</ul>\n<ul>\n<li>Are familiar with embedded toolchains, operating systems, and debugging tools.</li>\n</ul>\n<ul>\n<li>Have experience with both rapid prototyping and scalable product development.</li>\n</ul>\n<ul>\n<li>(Nice to have) Have experience with Zephyr RTOS.</li>\n</ul>\n<ul>\n<li>(Nice to have) Have worked with networking/wireless stacks (BLE, Wi-Fi).</li>\n</ul>\n<ul>\n<li>(Nice to have) Have experience with robotic system bring-up or Linux kernel development.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_37a117ac-7f2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2710d0c7-8f1c-4e1a-bf7a-4000fc5a8d68","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $325K • Offers Equity","x-skills-required":["C","C++","Embedded toolchains","Operating systems","Debugging tools","Rapid prototyping","Scalable product development","Zephyr RTOS","Networking/wireless stacks","Robotic system bring-up","Linux kernel development"],"x-skills-preferred":["Embedded expertise","Strong debugging skills","Passion for building systems that perform under real-world conditions"],"datePosted":"2026-03-06T18:28:02.693Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, Embedded toolchains, Operating systems, Debugging tools, Rapid prototyping, Scalable product development, Zephyr RTOS, Networking/wireless stacks, Robotic system bring-up, Linux kernel development, Embedded expertise, Strong debugging skills, Passion for building systems that perform under real-world
conditions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c0a8f07-6b9"},"title":"Principal Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Principal Software Engineer at their Beijing office. This role sits at the heart of AI infrastructure development, driving innovation in large-scale AI infrastructure. You will be instrumental in designing and implementing high-performance, massively scalable infrastructure required to deploy frontier LLM models.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Principal Software Engineer on the AI Infrastructure team, you will be responsible for designing and implementing innovative system optimization solutions for internal LLM workloads. You will optimize LLM inference workloads through innovative kernel, algorithm, scheduling, and parallelization technologies. 
You will also continuously develop and maintain internal LLM inference infrastructure, discovering new LLM system optimization needs and innovations.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Keep up to date with and utilize the latest developments in LLM system optimization.</li>\n<li>Take the lead in designing innovative system optimization solutions for internal LLM workloads.</li>\n<li>Optimize LLM inference workloads through innovative kernel, algorithm, scheduling, and parallelization technologies.</li>\n<li>Continuously develop and maintain internal LLM inference infrastructure.</li>\n<li>Discover new LLM system optimization needs and innovations.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>A bachelor&#39;s degree or higher in computer science, engineering, or a related field; a PhD is preferred.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Strong programming skills in Python and C/C++.</li>\n<li>5+ years of experience in machine learning system development and optimization.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>A growth mindset and a passion for learning new things.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n<li>Access to cutting-edge technology and resources.</li>\n</ul>","url":"https://yubhub.co/jobs/job_3c0a8f07-6b9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-28/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","C/C++","Machine learning system development and optimization"],"x-skills-preferred":["CUDA kernel development and optimization","Experience in optimizing communication layer / kernels for deep learning systems"],"datePosted":"2026-03-06T07:32:22.965Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Beijing"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C/C++, Machine learning system development and optimization, CUDA kernel development and optimization, Experience in optimizing communication layer / kernels for deep learning systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_96cf54a4-999"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Senior Software Engineer at their Beijing office. This role sits at the heart of AI Infrastructure development, driving innovation in large-scale AI infrastructure. You will be instrumental in designing and implementing high-performance, massively scalable infrastructure required to deploy frontier LLM models.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking brilliant and passionate engineers to work with us on the most interesting and challenging problems of AI Infrastructure development.
As a Senior Software Engineer, you will be responsible for designing and implementing the high-performance, massively scalable infrastructure required to deploy frontier LLM models through innovative GPU kernel, compression, scheduling and parallelization optimizations.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Keep up to date with and utilize the latest developments in LLM system optimization.</li>\n<li>Discover/solve impactful technical problems, advance state-of-the-art LLM technologies, and translate ideas into production.</li>\n<li>Optimize LLM inference workloads through innovative kernel, algorithm, scheduling, and parallelization technologies.</li>\n<li>Continuously maintain internal LLM inference infrastructure.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>A bachelor&#39;s degree or higher in computer science, engineering, or a related field; a PhD is preferred.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Strong programming skills in Python and C/C++.</li>\n<li>2+ years of experience in machine learning system development and optimization.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>A growth mindset and a passion for learning new things.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>","url":"https://yubhub.co/jobs/job_96cf54a4-999","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-64/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","C/C++","Machine learning system development and optimization"],"x-skills-preferred":["CUDA kernel development and optimization","Experience in optimizing communication layer / kernels for deep learning systems"],"datePosted":"2026-03-06T07:32:05.702Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Beijing"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C/C++, Machine learning system development and optimization, CUDA kernel development and optimization, Experience in optimizing communication layer / kernels for deep learning systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7917d1eb-6e2"},"title":"Engineering Manager - Inference","description":"<p>We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity&#39;s products and APIs, serving millions of users with state-of-the-art AI capabilities.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>You will own the technical direction and execution of our inference systems while building and leading a world-class team of inference engineers.
Our current stack includes Python, PyTorch, Rust, C++, and Kubernetes.</p>\n<ul>\n<li>Lead and grow a high-performing team of AI inference engineers</li>\n<li>Develop APIs for AI inference used by both internal and external customers</li>\n<li>Architect and scale our inference infrastructure for reliability and efficiency</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of engineering experience with 2+ years in a technical leadership or management role</li>\n<li>Deep experience with ML systems and inference frameworks (PyTorch, TensorFlow, ONNX, TensorRT, vLLM)</li>\n<li>Strong understanding of LLM architecture: Multi-Head Attention, Multi/Grouped-Query Attention, and common layers</li>\n</ul>","url":"https://yubhub.co/jobs/job_7917d1eb-6e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/2a87ccbf-82ef-4fc7-b1ed-4dd18b11baf9","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300K - $405K","x-skills-required":["ML systems","inference frameworks","LLM architecture"],"x-skills-preferred":["CUDA","Triton","custom kernel development"],"datePosted":"2026-03-04T12:24:50.159Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, inference frameworks, LLM architecture, CUDA, Triton, custom kernel development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}}]}