<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>65fe142e-5fa</externalid>
      <Title>Intern in AI Development for Ultra-High Precision Spraying</Title>
      <Description><![CDATA[<p>Join our team at Bayer and contribute to the development of innovative solutions for the agricultural industry. As an intern in AI development for ultra-high precision spraying, you will work with a multidisciplinary team to evaluate and optimize image-processing models and camera hardware for plant and weed recognition.</p>
<p>Your tasks will include:</p>
<ul>
<li>Working with a team of product managers, agronomists, and developers to integrate and adapt models into existing applications</li>
<li>Conducting data analyses and visualizing results</li>
<li>Supporting the development of innovative solutions for precision agriculture</li>
</ul>
<p>We offer a dynamic and inclusive work environment where you can bring your ideas and perspectives to the table. Our team is committed to making a positive impact on the world through our work.</p>
<p>As an intern, you will have the opportunity to gain hands-on experience and develop new skills in a real-world setting. You will be supported by experienced colleagues and have access to a range of resources and training opportunities.</p>
<p>If you are passionate about AI, data science, and precision agriculture, we encourage you to apply for this exciting opportunity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI development, image processing, camera hardware, plant and weed recognition, data analysis, precision agriculture, TensorFlow, PyTorch, NVIDIA hardware, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Agriculture</Industry>
      <Employername>Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that produces crop protection products, seeds, and biotechnology traits.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949975720399</Applyto>
      <Location>Monheim</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>588dfb0e-611</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your responsibilities will include serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>
<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>
<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>
<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>
<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>
<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>
<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
<p>To be successful in this role, you will need to have a B.S. in Computer Science or a related technical discipline, or equivalent experience.</p>
<p>You will also need to have 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with an expertise focused on scalable Kubernetes solutions.</p>
<p>You will need to be fluent in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>
<p>You will need a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>
<p>You will need to be familiar with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL).</p>
<p>You will need to have experience with running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes.</p>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4557835006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f0f66ce3-d78</externalid>
      <Title>Senior GenAI Research Engineer - Optimization and Kernels</Title>
      <Description><![CDATA[<p>As a research engineer on the Scaling team at Databricks, you will be responsible for keeping up with the latest developments in deep learning and advancing the scientific frontier by creating new techniques that go beyond the state of the art.</p>
<p>You will work together on a collaborative team of researchers and engineers with diverse backgrounds and technical training. Your goal will be to make our customers successful in applying state-of-the-art LLMs and AI systems, and we encode our scientific expertise into our products to make that possible.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Driving performance improvements through advanced optimization techniques including kernel fusion, mixed precision, memory layout optimization, tiling strategies, and tensorization for training-specific patterns</li>
<li>Designing, implementing, and optimizing high-performance GPU kernels for training workloads (e.g., attention mechanisms, custom layers, gradient computation, activation functions) targeting NVIDIA architectures</li>
<li>Designing and implementing distributed training frameworks for large language models, including parallelism strategies (data, tensor, pipeline, ZeRO-based) and optimized communication patterns for gradient synchronization and collective operations</li>
<li>Profiling, debugging, and optimizing end-to-end training workflows to identify and resolve performance bottlenecks, applying memory optimization techniques like activation checkpointing, gradient sharding, and mixed precision training</li>
</ul>
<p>We look for candidates with a strong background in computer science or a related field, hands-on experience writing and tuning CUDA kernels for ML training applications, and a deep understanding of parallelism techniques and memory optimization strategies for large-scale model training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>CUDA, NVIDIA GPU architecture, PyTorch, distributed training frameworks, parallelism techniques, memory optimization strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8297797002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53bd182c-902</externalid>
      <Title>DSP Engineer, EW</Title>
      <Description><![CDATA[<p>Anduril Industries is seeking a highly skilled DSP Engineer to join our team. As a DSP Engineer, you will design, develop, and optimize digital signal processing algorithms and systems for radio direction finding and direction-of-arrival estimation in defense applications.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with a multidisciplinary team of software and hardware engineers to develop software defined radios;</li>
<li>Implementing high-performance, real-time signal processing chains on embedded and hardware platforms to support mission-critical sensing capabilities;</li>
<li>Developing Modeling and Simulation (M&amp;S) code for RADAR techniques and data analysis, including Hardware-in-the-Loop / Software-in-the-Loop (HIL/SIL) testing;</li>
<li>Participating in laboratory and field testing of RF systems and techniques;</li>
<li>Participating in the maturation of RF systems into deployable systems and products.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>5+ years of experience and a B.S. in Electrical Engineering or a related field;</li>
<li>Strong foundation in digital signal processing, communications theory, and systems engineering, with an emphasis on direction-finding algorithm implementation;</li>
<li>Hands-on experience with direction finding, angle-of-arrival estimation, and multi-antenna signal processing;</li>
<li>Strong experience with DSP implementation for embedded devices, including FPGAs, Nvidia Jetson, and software-defined radios;</li>
<li>Strong knowledge of Python and MATLAB;</li>
<li>Experience with CUDA or GPU accelerated frameworks like cuSignal is preferred;</li>
<li>Familiar with deep learning algorithms;</li>
<li>Familiar with wireless communication standards (Bluetooth, 3G/4G/5G, Wi-Fi, SINCGARS, MUOS, etc.).</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Masters or PhD degree in Electrical, Electronics, Computer Engineering, or related fields;</li>
<li>Experience with ML frameworks such as TensorFlow and PyTorch;</li>
<li>Defense, national security, or aerospace domain familiarity through industry or education;</li>
<li>Extensive Digital Signal Processing (DSP) knowledge and experience;</li>
<li>Expertise in Synthetic Aperture Radar (SAR) and/or Inverse SAR (ISAR): Image formation, waveforms, phenomenology, modeling and simulation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Digital Signal Processing, Comms Theory, System Engineering, Direction Finding Algorithm Implementation, Embedded Devices, FPGA, Nvidia Jetson, Software Defined Radios, Python, MATLAB, CUDA, GPU Accelerated Frameworks, Deep Learning Algorithms, Wireless Communication Standards, ML Frameworks, TensorFlow, PyTorch, Defense Domain, National Security, Aerospace Domain, Synthetic Aperture Radar, Inverse SAR, Image Formation, Waveforms, Phenomenology, Modeling and Simulation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://anduril.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5031495007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d799d883-0dd</externalid>
      <Title>Solutions Architect- Networking</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will lead innovation at every turn, demonstrating thought leadership and engaging hands-on throughout our customers&#39; entire lifecycle, from establishing their Kubernetes environment to developing proofs of concept, onboarding, and optimizing workloads.</p>
<p>In this role, you will:</p>
<ul>
<li>Serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on networking technologies within high-performance compute (HPC) environments.</li>
<li>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</li>
<li>Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</li>
<li>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</li>
<li>Act as a virtual member of CoreWeave&#39;s Networking product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</li>
<li>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</li>
<li>Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</li>
<li>Stay informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders.</li>
<li>Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</li>
<li>Represent CoreWeave at conferences and industry events, with occasional travel as required.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>B.S. in Computer Science or a related technical discipline, or equivalent experience.</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in infrastructure networking.</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience designing and implementing cloud solutions.</li>
<li>Proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</li>
<li>Expertise with a broad range of networking technologies and topics, with the familiarity to understand needs and use cases as they relate to securing and enabling high-performance networking environments.</li>
<li>Experience managing infrastructure networking, Kubernetes CSI management, and private networking concepts.</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Code contributions to open-source inference frameworks.</li>
<li>Experience with scripting and automation related to network technologies.</li>
<li>Experience with building solutions across multi-cloud environments.</li>
<li>Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>cloud computing, Kubernetes, infrastructure networking, high-performance computing, networking technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), open-source inference frameworks, scripting and automation, multi-cloud environments, latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4568528006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9166d234-4c5</externalid>
      <Title>Solutions Architect - HPC/AI/ML</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers establish their Kubernetes environment, develop proofs of concept, onboard, and optimise workloads. You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on AI/ML workloads within high-performance compute (HPC) environments.</p>
<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>
<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>
<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimisation and suggesting suitable solutions.</p>
<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>
<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000 to $225,000 SGD</Salaryrange>
      <Skills>cloud computing concepts, architecture, technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes, code contributions to open-source inference frameworks, scripting and automation related to AI/ML workloads, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimisation, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider specialising in artificial intelligence and machine learning workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649044006</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1868194d-726</externalid>
      <Title>Operations Engineer, HPC Networking</Title>
      <Description><![CDATA[<p>In this role, you will support the deployment, monitoring, troubleshooting, and maintenance of large-scale InfiniBand fabrics, ensuring their stability and performance.</p>
<p>The ideal candidate will have a strong operations mindset, effective collaboration skills, and the ability to solve complex issues in a dynamic environment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Regularly monitoring the performance and health of InfiniBand fabrics, including switches, host adapters, and nodes.</li>
<li>Investigating and resolving operational issues within InfiniBand fabrics, such as network connectivity problems and performance bottlenecks.</li>
<li>Assisting with the installation and operational bring-up of large InfiniBand fabrics in collaboration with onsite personnel and customer teams.</li>
<li>Performing routine maintenance and upgrades on InfiniBand switches and control plane components.</li>
<li>Collaborating with HPC cluster operations teams to provide troubleshooting and operational expertise.</li>
</ul>
<p>Investing in our people is one of our top priorities, and we value candidates who bring diverse experiences to our teams.</p>
<p>Minimum Qualifications:</p>
<ul>
<li>At least 1 year of experience with InfiniBand or similar networking technologies.</li>
<li>Solid understanding of networking concepts, including architectures, topologies, operational best practices, and troubleshooting.</li>
<li>Experience with Linux system administration and maintenance.</li>
<li>Proficiency in at least one scripting language.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Hands-on experience with Nvidia UFM or similar fabric management tools.</li>
<li>Familiarity with SLURM job scheduler and its role in HPC environments.</li>
<li>Experience with monitoring and visualization platforms such as Grafana or Prometheus.</li>
<li>Experience with operational tooling and automation frameworks like Ansible.</li>
<li>Knowledge of data center operations, including server racks, and cabling.</li>
<li>Python or Bash scripting.</li>
</ul>
<p>Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $110,000 to $179,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,000 to $179,000</Salaryrange>
      <Skills>InfiniBand, Linux system administration, Scripting language, Networking concepts, Architectures, Topologies, Operational best practices, Troubleshooting, Nvidia UFM, SLURM job scheduler, Grafana, Prometheus, Ansible, Data center operations, Server racks, Cabling, Python, Bash scripting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4673462006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8092b6e-7f5</externalid>
      <Title>Bare Metal Support Engineer</Title>
      <Description><![CDATA[<p>As a Bare Metal Support Engineer at CoreWeave, you will be responsible for supporting, operating, and maintaining CoreWeave&#39;s extensive GPU fleet across our growing data centers in the U.S., Europe, and beyond.</p>
<p>You will work closely with customers, data center technicians, and engineering teams to ensure the reliability, performance, and scalability of our infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Providing high-level support for customers utilizing bare-metal GPU fleets on CoreWeave Cloud.</li>
<li>Diagnosing, triaging, and investigating reported customer issues and high-priority incidents, identifying root causes and escalating when necessary.</li>
<li>Developing a deep understanding of customer workloads and use cases to provide tailored technical support.</li>
<li>Coordinating remote troubleshooting and hardware interventions with Data Center Technicians.</li>
<li>Creating and maintaining internal documentation, including troubleshooting guides, best practices, and knowledge base articles.</li>
<li>Participating in an on-call rotation to support production clusters and ensure operational reliability.</li>
<li>Collaborating with engineering teams to improve hardware reliability, software stability, and system performance.</li>
<li>Implementing automation and scripting to streamline support workflows and reduce manual interventions.</li>
<li>Performing in-depth log analysis and debugging across multiple layers of the stack (firmware, drivers, hardware).</li>
<li>Providing feedback to internal teams on common support issues to drive continuous improvements.</li>
<li>Working with networking teams to troubleshoot connectivity issues affecting customer workloads.</li>
<li>Supporting supercomputing infrastructure running GPU workloads at scale.</li>
<li>Driving operational excellence by refining internal processes and support methodologies.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Experience in data centers, GPU clusters, server deployments, system administration, or hardware troubleshooting.</li>
<li>Demonstrated experience driving resolutions and continuous improvements across cross-functional environments and teams within a data center environment.</li>
<li>Intermediate knowledge of Linux (Ubuntu, CentOS, or similar), including command-line proficiency.</li>
<li>Experience with NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing (HPC), and large-scale data center environments.</li>
<li>Experience in networking fundamentals (TCP/IP, VLANs, DNS, DHCP) and troubleshooting tools.</li>
<li>Hands-on experience with firmware updates, BIOS configurations, and driver management.</li>
<li>Experience analyzing system logs and debugging issues across firmware, drivers, and hardware layers.</li>
<li>Experience working with Jira, Confluence, Notion, or other issue-tracking and documentation platforms.</li>
<li>Experience in scripting and automation (Python, Bash, Ansible, or similar).</li>
</ul>
<p>If you&#39;re a curious and analytical individual with a passion for problem-solving and a desire to work in a fast-paced environment, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$83,000 to $132,000</Salaryrange>
      <Skills>Linux, GPU clusters, server deployments, system administration, hardware troubleshooting, NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing, large-scale data center environments, networking fundamentals, troubleshooting tools, firmware updates, BIOS configurations, driver management, system logs, debugging issues, Jira, Confluence, Notion, issue-tracking, documentation platforms, scripting, automation, Kubernetes, Docker, containerized infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that delivers a platform of technology, tools, and teams to enable innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>83000</Compensationmin>
      <Compensationmax>132000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4560350006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>57af9e4c-fa7</externalid>
      <Title>Firmware Engineer, SPX</Title>
      <Description><![CDATA[<p>As a Firmware Engineer, you will focus on AMI SPX-based BMC firmware development for GB200 server platforms. You will collaborate with cross-functional teams to develop, enhance, and optimize embedded firmware modules that power CoreWeave&#39;s large-scale data center deployments.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop &amp; Maintain: Contribute to the design, implementation, and testing of BMC firmware in AMI SPX environments using C.</li>
<li>Integrate: Work cross-functionally with hardware, software, and QA teams to ensure seamless firmware-hardware integration.</li>
<li>Debug &amp; Optimize: Perform issue triage, root-cause analysis, and implement bug fixes and performance improvements.</li>
<li>Testing &amp; Validation: Conduct firmware validation across multiple hardware revisions and test environments.</li>
<li>Document: Produce clear and maintainable documentation for code, configurations, and testing procedures.</li>
<li>Collaborate &amp; Learn: Work alongside senior engineers to expand your expertise in firmware stack design, Redfish, and system-level architecture.</li>
</ul>
<p>The base salary range for this role is $109,000 to $160,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer: The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can be influenced by a variety of factors, including qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $160,000</Salaryrange>
      <Skills>AMI MegaRAC/SPX firmware, C, embedded development workflows, Linux, Git, debugging tools (GDB, JTAG, or equivalent), hardware interfaces (I2C, SPI, UART), firmware build systems, BMC architectures, DMTF Redfish, IPMI standards, GB200 or other NVIDIA Grace Hopper server platforms, Python or Bash for testing or automation, Jenkins or similar CI/CD environments, open-source firmware or embedded projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>109000</Compensationmin>
      <Compensationmax>160000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4615564006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9d3cf8fb-aa4</externalid>
      <Title>Senior Engineer, Embedded OS</Title>
      <Description><![CDATA[<p>As an Embedded OS Engineer at Shield AI, you will design, develop, and optimize the operating system components that enable our unmanned aerial systems (UAS) to operate efficiently and reliably. Your work will ensure that the software infrastructure of our UAVs meets the high standards required for autonomous operations in dynamic and challenging environments.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, develop, and maintain the Linux-based or commercial real-time operating system components for UAVs, focusing on performance, reliability, and security.</li>
<li>Optimise the OS for concurrent processing and efficient resource management.</li>
<li>Collaborate with hardware engineers, software developers, and autonomy engineers to ensure seamless integration of OS components with other subsystems.</li>
<li>Develop and maintain drivers and middleware for various hardware components and sensors, especially camera and timing systems.</li>
<li>Conduct rigorous testing and debugging to ensure the stability and robustness of the OS.</li>
<li>Stay updated with the latest advancements in OS technologies and apply best practices to our systems.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Minimum of 5 years of related experience with a Bachelor&#39;s degree in Computer Science (or related field); or 3 years with a Master&#39;s degree; or 2 years with a PhD; or equivalent work experience.</li>
<li>Proven experience in OS development, particularly in real-time and embedded systems.</li>
<li>Strong understanding of RTOS concepts, concurrent programming, and resource management.</li>
<li>Proficiency in programming languages such as C, C++, Python, or similar.</li>
<li>Experience with developing drivers and middleware for hardware components.</li>
<li>Familiarity with cyber security principles and practices in embedded systems, including secure boot and data-at-rest encryption.</li>
<li>Excellent communication skills, with the ability to effectively collaborate with multidisciplinary teams and external stakeholders.</li>
<li>Demonstrated track record of assuming ownership over development processes and features and delivering outstanding outcomes.</li>
<li>Proven track record of successfully shipping products: navigating development cycles, overcoming obstacles, and delivering high-quality solutions on deadline in a fast-paced environment.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience customising, deploying, and maintaining Linux distributions created with Yocto on various hardware platforms</li>
<li>Experience customising, deploying, and maintaining RTOSes such as VxWorks, RTLinux, or Green Hills</li>
<li>Experience with Nvidia OS customisation and maintenance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000 - $185,000 a year</Salaryrange>
      <Skills>Embedded OS development, Real-time and embedded systems, RTOS concepts, Concurrent programming, Resource management, C, C++, Python, or similar, Driver development, Middleware development, Cyber security principles, Secure boot, Data-at-rest encryption, Linux distribution creation with Yocto, RTOS customisation and maintenance, Nvidia OS customisation and maintenance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015. It develops intelligent systems for protecting service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>125000</Compensationmin>
      <Compensationmax>185000</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/d5fdafe1-2e50-4543-98b1-38f3a3d26e12</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8c5fdc6a-a68</externalid>
      <Title>Senior Engineer, Build and CI</Title>
      <Description><![CDATA[<p>As a Hivemind Build and CI engineer, you will design and implement engineering-centric automation across the organisation. You will work closely with product development teams, implementing policies and guidelines into the continuous integration and delivery systems.</p>
<p>This role requires you to be very hands-on and to contribute to discussions with cross-functional teams across the organisation. We embrace an attitude of solving problems efficiently at their root cause.</p>
<p>A large part of your day-to-day will be spent in our build pipelines and build configuration management, making changes that reduce developer iteration time.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Embed with a product engineering team as their primary Software Operations partner, working closely with engineers to improve how software is built, tested, and delivered.</li>
<li>Design, implement, and continuously improve build pipelines, CI workflows, and supporting tooling with a focus on scalability, reliability, and developer experience.</li>
<li>Apply strong C++ and/or Go software development experience to build and maintain robust build and CI solutions.</li>
<li>Reduce iteration time and friction by improving build performance, test reliability, and CI feedback loops.</li>
<li>Debug and resolve complex build, test, and CI failures using disciplined root-cause analysis.</li>
<li>Influence technical direction without formal authority by earning trust through collaboration, technical credibility, and a deep understanding of team and program constraints.</li>
<li>Promote best practices in build hygiene, CI/CD design, dependency management, and software development workflows that scale across teams and programs.</li>
<li>Apply knowledge of software design patterns and architectural principles to design maintainable CI systems and build abstractions.</li>
<li>Coach and mentor product engineers on build and CI topics, helping teams make better design decisions and understand trade-offs.</li>
<li>Represent the Software Operations organisation within the product team, acting as a bridge between platform capabilities and product needs.</li>
<li>Advocate for practical, production-ready solutions that improve developer productivity without sacrificing velocity or quality.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS in computer science or related engineering field with 3+ years of professional experience.</li>
<li>Experience with configuration management tools (Makefile, CMake, Conan, Bazel, etc.).</li>
<li>Strong demonstrated proficiency in continuous integration/delivery (e.g. GitHub Actions, ADO, TeamCity, etc.).</li>
<li>Strong understanding of C++ (or another compiled language), Linux, and CMake.</li>
<li>Strong knowledge of APIs, web services, and identity access management.</li>
<li>Strong knowledge of containers (e.g. Docker, Podman, etc.).</li>
<li>Strong knowledge of scripting languages (Bash, Python, PowerShell).</li>
<li>Strong knowledge of Git.</li>
<li>Strong system administration skills in Linux (Windows a bonus).</li>
<li>Strong desire to learn and grow on the job.</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Strong experience with the Conan package manager.</li>
<li>Experience with Rust in a production environment.</li>
<li>Experience with hardware-in-the-loop build/deploy/test systems.</li>
<li>Experience owning build infrastructure.</li>
<li>Experience with NVIDIA Jetson products.</li>
</ul>
<p><strong>Salary and Benefits:</strong></p>
<p>$120,000 - $180,000 a year</p>
<p>Pay within range listed + Bonus + Benefits + Equity</p>
<p>Temporary employee offer package:</p>
<p>Pay within range listed above + temporary benefits package (applicable after 60 days of employment)</p>
<p>Salary compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. All offers are contingent on a cleared background check and a possible reference check. Military fellows and part-time employees are not eligible for benefits. Please speak to your talent acquisition representative for more information.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$120,000 - $180,000 a year</Salaryrange>
      <Skills>configuration management tools, continuous integration/delivery, C++, Linux, CMake, APIs, web services, identity access management, containers, scripting languages, Git, system administration in Linux, Conan Package Manager, Rust, Hardware in the Loop build/deploy/test systems, NVIDIA Jetson products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for protecting service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>120000</Compensationmin>
      <Compensationmax>180000</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/6cdd98c9-6579-4609-8ac3-9fc0604f6160</Applyto>
      <Location>San Diego</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cdd309a3-028</externalid>
      <Title>Strategic Partner Lead - OEM and Neoclouds</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>As part of our rapid global expansion, we are looking for a Strategic Cloud Partner Lead to join our US Partner GTM team. Our ambitions are high and wide. To bring to market the most advanced technology, we are building a best-in-class GTM team. Builders wanted!</p>
<p>In this role, you will be instrumental in driving the strategy and execution for our strategic OEM and NeoCloud partnerships globally.</p>
<p>Responsibilities</p>
<ul>
<li><p>Build strategic partnership plans and executive relationships:</p>
<ul>
<li>Identify and develop revenue-generating strategic partnerships with OEMs like Dell, HPE, and Cisco, and neoclouds like CoreWeave, Lambda, Nebius, and others.</li>
<li>Negotiate or enhance partnership agreements and contracts, ensuring they are mutually beneficial and align with our company&#39;s interests.</li>
<li>Develop differentiated offerings to take Mistral product and model offerings to market with OEM and neocloud partners.</li>
<li>Develop and execute global and regional strategic plans to drive pipeline and co-selling activities with OEMs and neoclouds.</li>
<li>Own end-to-end partnership responsibilities including financial business outcomes, co-sell, technical integrations, joint marketing campaigns, governance and reporting, and executive engagement strategy.</li>
</ul>
</li>
<li><p>Execute OEM and neocloud GTM plans to drive results:</p>
<ul>
<li>Execute strategic partnership plans to achieve pipeline and revenue results as a critical part of the US GTM team.</li>
<li>Serve as a point of escalation to help resolve field conflicts using your network of key contacts from OEM and neocloud partners and your knowledge of each partner’s strategic priorities.</li>
<li>Lead the development and communication of the partnership performance analysis, including financial performance, agreed-upon metrics, and key insights.</li>
<li>Provide enablement and support to strategic partners to ensure they are effectively promoting and selling our products or services.</li>
</ul>
</li>
<li><p>Company builder and collaborator:</p>
<ul>
<li>Collaborate with cross-functional teams to ensure the successful implementation of partnership initiatives.</li>
<li>Work with Mistral engineering and product teams to help advance the customer experience for Mistral’s models and tools on OEM and neocloud technology, including defining new routes to market with OEM partners.</li>
<li>Develop a strong collaboration with Mistral’s GTM team to help drive pipeline and advance opportunities to closure.</li>
<li>Align Mistral, OEM, and neocloud partner regional sales teams with Mistral’s sales counterparts and facilitate a regular MBR cadence.</li>
</ul>
</li>
</ul>
<p>Who you are</p>
<ul>
<li>Bachelor&#39;s degree in Business, Marketing, Information Technology, or a related field. An MBA or related advanced degree is preferred.</li>
<li>Minimum of 10 years of experience in GTM roles, with 5+ years of experience in partner management, business development, or sales roles.</li>
<li>Knowledge of the NVIDIA partner ecosystem and the sales motion between NVIDIA and OEM and neocloud partners.</li>
<li>Ideally, a strong network at the global level within OEM and neocloud partners.</li>
<li>Excellent negotiation, communication, and interpersonal skills.</li>
<li>Strong understanding of the technology or software industry, with a focus on AI infrastructure and partnerships.</li>
<li>Ability to travel as needed to meet with strategic partners and attend industry events.</li>
<li>Strong analytical skills, with the ability to monitor and analyze partnership performance and provide actionable insights.</li>
<li>Builder and self-starter with the ability to work independently and as part of a team.</li>
</ul>
<p>What we offer</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Healthcare: Medical/Dental/Vision covered for you and your family</li>
<li>401(k): 6% matching</li>
<li>Transportation: reimbursed office parking charges, or $120/month for public transport</li>
<li>Coaching: we offer BetterUp coaching on a voluntary basis</li>
<li>Sport: $120/month reimbursement for gym membership</li>
<li>Meal voucher: $400 monthly allowance for meals</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GTM, Partner Management, Business Development, Sales, NVIDIA Partner Ecosystem, AI Infrastructure, Partnerships</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/2e48b51e-4872-41ee-833d-f8b57d25cf0d</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8e582153-6af</externalid>
      <Title>Senior DevOps Lead - Cloud &amp; Autonomous System</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>
<p>We are a small company with under 100 employees, operating with the energy of a startup. However, we&#39;re also publicly traded, which gives our employees access to the liquidity of publicly-traded equity.</p>
<p>As a Senior DevOps Lead at Cyngn, you will play a vital role in architecting and managing infrastructure across cloud and autonomous vehicle systems. This position combines traditional cloud DevOps leadership with specialized expertise in robotics and autonomous systems infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Lead and architect cloud and vehicle infrastructure initiatives across AWS and ROS/Linux environments</li>
<li>Design and implement scalable solutions for both cloud services and autonomous vehicle systems</li>
<li>Establish and maintain DevOps best practices, CI/CD pipelines, and infrastructure as code</li>
<li>Drive observability, monitoring, and incident response strategies</li>
<li>Optimize performance and cost efficiency of cloud and edge computing resources</li>
<li>Mentor team members and foster a developer-friendly environment</li>
<li>Manage on-call rotations and incident response processes</li>
<li>Architect solutions for processing and storing large-scale vehicle telemetry data</li>
<li>Lead security initiatives and compliance efforts across infrastructure</li>
</ul>
<p>Requirements</p>
<ul>
<li>10+ years of relevant DevOps/Infrastructure experience</li>
<li>Proven track record as a technical lead in platform or infrastructure teams</li>
<li>Advanced expertise in AWS services, infrastructure as code (Terraform), and Kubernetes</li>
<li>Strong experience with service mesh (Istio) and Helm/Kustomize</li>
<li>Deep understanding of ROS/ROS2 and Linux kernel configurations</li>
<li>Experience with GPU configurations and ML infrastructure</li>
<li>Expertise in ARM and NVIDIA CUDA platform configurations</li>
<li>Strong programming skills in Python and shell scripting</li>
<li>Experience with infrastructure automation (Ansible)</li>
<li>Expertise in CI/CD tools (Jenkins, GitHub Actions)</li>
<li>Strong system architecture and design skills</li>
<li>Excellence in technical documentation</li>
<li>Outstanding problem-solving abilities</li>
<li>Strong leadership and mentoring capabilities</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with autonomous vehicle systems</li>
<li>Track record of optimizing GPU-based ML infrastructure</li>
<li>Experience with large-scale IoT deployments</li>
<li>Contributions to open-source projects</li>
<li>Experience with real-time systems and low-latency requirements</li>
<li>Expertise in security implementations including SSO, IdP, and AWS Cognito</li>
<li>Experience with JFrog Artifactory and container registry management</li>
<li>Proficiency in AWS IoT Greengrass</li>
<li>Experience with container resource management on edge devices</li>
<li>Understanding of CPU affinity and priority scheduling</li>
<li>Track record of implementing cost optimization strategies</li>
<li>Experience with scaling systems both horizontally and vertically</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</li>
<li>Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</li>
<li>Company 401(k)</li>
<li>Commuter Benefits</li>
<li>Flexible vacation policy</li>
<li>Sabbatical leave opportunity after five years with the company</li>
<li>Paid Parental Leave</li>
<li>Daily lunches for in-office employees</li>
<li>Monthly meal and tech allowances for remote employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,000-225,000 per year</Salaryrange>
      <Skills>AWS services, infrastructure as code (Terraform), Kubernetes, service mesh (Istio), Helm/Kustomize, ROS/ROS2, Linux kernel configurations, GPU configurations, ML infrastructure, ARM, NVIDIA CUDA platform configurations, Python, shell scripting, infrastructure automation (Ansible), CI/CD tools (Jenkins, GitHub Actions), system architecture and design skills, technical documentation, problem-solving abilities, leadership and mentoring capabilities, autonomous vehicle systems, optimizing GPU-based ML infrastructure, large-scale IoT deployments, open-source projects, real-time systems and low-latency requirements, security implementations including SSO, IdP, and AWS Cognito, JFrog artifactory and container registry management, AWS IoT Greengrass, container resource management on edge devices, CPU affinity and priority scheduling, cost optimization strategies, scaling systems both horizontally and vertically</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>198000</Compensationmin>
      <Compensationmax>225000</Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/1c31b7d8-cf85-472f-9358-1e10189cf815</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c1dcea75-d5a</externalid>
      <Title>Member of Technical Staff - Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced engineer to join our team in Freiburg, Germany or San Francisco, USA. As a Member of Technical Staff - Infrastructure Engineer, you will be responsible for maintaining and scaling our research infrastructure, ensuring its health and optimizing components to extract peak performance from the system. You will also collaborate with research teams to deeply understand their infrastructure needs and design solutions that balance performance with cost efficiency.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Maintaining research infrastructure, ensuring health, and optimizing components to extract peak performance from the system (both on application and infrastructure side)</li>
<li>Scaling infrastructure to meet growing research demands while maintaining reliability and performance</li>
<li>Collaborating with research teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency</li>
<li>Identifying and resolving performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale</li>
<li>Building and evolving telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets</li>
<li>Participating in on-call rotations and incident response to maintain system reliability</li>
</ul>
<p>Technical focus includes:</p>
<ul>
<li>Python, Bash, Go</li>
<li>Kubernetes</li>
<li>Nvidia GPU drivers and operators</li>
<li>OTel, Prometheus</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Experience building or operating large-scale training platforms</li>
<li>Experience working with large-scale GPU compute clusters</li>
<li>Proven ability to debug performance and reliability issues across large distributed fleets</li>
<li>Strong problem-solving skills and ability to work independently</li>
<li>Strong communication skills and the ability to work effectively with both internal and external partners</li>
<li>Deep knowledge of modern cloud infrastructure including Kubernetes, Infrastructure as Code, AWS, and GCP</li>
<li>Experience with SLURM</li>
</ul>
<p>We offer a competitive base annual salary of $180,000-$300,000 USD and a hybrid work model with a meaningful in-person presence.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$300,000 USD</Salaryrange>
      <Skills>Python, Bash, Go, Kubernetes, Nvidia GPU drivers, Nvidia GPU operators, OTel, Prometheus, SLURM, Infrastructure as Code, AWS, GCP, large-scale training platforms, GPU compute clusters, distributed systems debugging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Black Forest Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/blackforestlabs.com.png</Employerlogo>
      <Employerdescription>Black Forest Labs develops foundational technologies for image and video creation, including Latent Diffusion, Stable Diffusion, and FLUX.</Employerdescription>
      <Employerwebsite>https://www.blackforestlabs.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/blackforestlabs/jobs/4925659008</Applyto>
      <Location>Freiburg (Germany), San Francisco (USA)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bcd6ddb9-0e2</externalid>
      <Title>Computational Fluid Dynamics (CFD) Internship</Title>
      <Description><![CDATA[<p>Our internship programs offer real-world projects, hands-on experience, and opportunities to collaborate with teams globally. As a Computational Fluid Dynamics (CFD) Intern, you will design, implement, and document custom Python applications that programmatically automate CFD workflows and streamline engineering processes, and you will support testing efforts to determine new feature readiness for commercial release.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, implement, and document custom Python applications using PyAnsys and other libraries</li>
<li>Capture and implement industry best practices related to modeling and simulation of engineered products</li>
<li>Create compelling technical marketing content to showcase outcomes of your developed frameworks and examples</li>
<li>Present results and progress regularly to Synopsys teams</li>
<li>Collaborate with Customer Excellence teams for technical discussions and to gather requirements</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Currently enrolled in a Bachelor&#39;s or Master&#39;s program in engineering or a related technical discipline at an accredited university in the United States or Canada with a minimum 3.0 GPA</li>
<li>Strong proficiency in Python</li>
<li>Experience with package management (Poetry/Pip), version control (GitHub), PyAnsys, and web applications is highly desirable</li>
<li>Experience with NVIDIA Omniverse (Kit, USD, or Create) and knowledge of USD (Universal Scene Description) are highly desirable</li>
<li>Exposure to Ansys CFD software, CAD packages, and a general understanding of IT hardware/system integration is highly desirable</li>
<li>Excellent written and verbal communication skills in English</li>
<li>Self-motivated with strong problem-solving, communication, and time management skills</li>
</ul>
<p>Key program facts include:</p>
<ul>
<li>Program length: 3 months</li>
<li>Location: Canonsburg, Pennsylvania; Evanston, Illinois; or Waterloo, Ontario</li>
<li>Full-time/part-time: Full-time</li>
<li>Start date: May or June 2026 (flexible based on academic calendar)</li>
</ul>
<p>The base salary range across the U.S. for this role is between $32.00 and $48.00 per hour. In addition, this role may be eligible for an annual bonus, equity, and other discretionary bonuses. Synopsys offers comprehensive health, wellness, and financial benefits as part of a competitive total rewards package.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$32.00-$48.00 per hour</Salaryrange>
      <Skills>Python, PyAnsys, NVIDIA Omniverse, USD, Ansys CFD software, CAD packages, IT hardware/system integration, Package management (Poetry/Pip), Version control (GitHub), Web applications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys creates high-performance silicon chips that help build a healthier, safer, and more sustainable world.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/canonsburg/summer-2026-computational-fluid-dynamics-cfd-internship-16235/44408/93286401680</Applyto>
      <Location>Canonsburg (Pennsylvania), Evanston (Illinois), Waterloo (Ontario)</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>baea7339-8a8</externalid>
      <Title>Sr. Systems Sales Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Sr. Systems Sales Engineer who combines strong technical depth with a passion for solving complex customer challenges.</p>
<p>You will design end-to-end enterprise solutions, guide customers through technical decision-making, and partner with sales to expand Corsair&#39;s footprint in high-performance computing and AI-driven workloads. The ideal candidate merges advanced systems knowledge with customer-facing expertise to shape solutions, support sales, and accelerate adoption of Corsair&#39;s enterprise platforms.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Platform &amp; Product Strategy: Collaborate with product management, engineering teams, and HPC integration partners to shape the roadmap for workstations and HPC platforms. Identify and evaluate emerging technologies, market trends, and evolving workloads to inform product strategies and unlock new business opportunities.</li>
<li>New Product Introduction (NPI): Develop and drive NPI readiness plans including technical documentation, sales enablement resources, and customer-facing solution guides. Ensure smooth product rollout by aligning engineering, marketing, support, and ecosystem partners.</li>
<li>AI &amp; Developer Ecosystem Engagement: Align with AI software partners, SDK/tool providers, and developer communities to build value-added integrations and optimize emerging AI workloads on Corsair platforms, and provide architecture-level guidance to support AI, ML, and HPC applications.</li>
<li>Customer Solutions &amp; Technical Leadership: Design system- and application-level solutions based on customer requirements; perform diagnostics, optimization, and version upgrade management; and act as a technical subject matter expert for enterprise accounts, providing advanced troubleshooting guidance and deployment support.</li>
<li>Client Relationship &amp; Escalation Management: Build and maintain strong customer relationships through effective communication, pre-sales support, and solution clarity. Manage hardware escalations by coordinating with internal teams and vendor partners to ensure timely issue resolution, and serve as a trusted hardware and technical SME across internal and external engagements.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, or related field; equivalent practical experience (10+ years) considered.</li>
<li>Extensive experience in high-performance computing, workstation architecture, or enterprise systems design.</li>
<li>Strong background in Solutions Architecture, Sales Engineering, Product Marketing, or ODM platform development.</li>
<li>Deep knowledge of Linux ecosystems, software build pipelines, and GPU computing technologies (NVIDIA CUDA, AMD ROCm, PCIe, InfiniBand).</li>
<li>Excellent communication and leadership skills, with the ability to translate complex technical concepts into clear business value.</li>
</ul>
<p>For roles that are based at our headquarters in Milpitas, CA: The starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future.</p>
<p>Annual Salary Range: $165,000-$180,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000-$180,000 USD</Salaryrange>
      <Skills>Linux ecosystems, software build pipelines, GPU computing technologies (NVIDIA CUDA, AMD ROCm, PCIe, InfiniBand), high-performance computing, workstation architecture, enterprise systems design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Corsair</Employername>
      <Employerlogo>https://logos.yubhub.co/corsair.com.png</Employerlogo>
      <Employerdescription>Corsair designs and manufactures high-performance computer components and peripherals.</Employerdescription>
      <Employerwebsite>https://www.corsair.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://edix.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/8694</Applyto>
      <Location>Milpitas</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>ba6b26fb-c39</externalid>
      <Title>Strategic Partner Lead - OEM and Neoclouds</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany and Singapore. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p><strong>Role Summary</strong></p>
<p>As part of our rapid global expansion, we are looking for a Strategic Cloud Partner Lead to join our US Partner GTM team. Our ambitions are high and wide. To bring to market the most advanced technology, we are building a best-in-class GTM team. Builders wanted!</p>
<p>In this role, you will be instrumental in driving the strategy and execution for our strategic OEM and NeoCloud partnerships globally.</p>
<p><strong>Responsibilities</strong></p>
<p>Build strategic partnership plans and executive relationships:</p>
<ul>
<li>Identify and develop revenue-generating strategic partnerships with OEMs like Dell, HPE, and Cisco, and neoclouds like CoreWeave, Lambda, Nebius, and others.</li>
<li>Negotiate or enhance partnership agreements and contracts, ensuring they are mutually beneficial and align with our company&#39;s interests.</li>
<li>Develop differentiated offerings to take Mistral products and models to market with OEM and neocloud partners.</li>
<li>Develop and execute global and regional strategic plans to drive pipeline and co-selling activities with OEMs and neoclouds.</li>
<li>Own end-to-end partnership responsibilities including financial business outcomes, co-sell, technical integrations, joint marketing campaigns, governance and reporting, and executive engagement strategy.</li>
</ul>
<p>Execute OEM and neocloud GTM plans to drive results:</p>
<ul>
<li>Execute strategic partnership plans to achieve pipeline and revenue results as a critical part of the US GTM team.</li>
<li>Serve as a point of escalation to help resolve field conflicts, using your network of key contacts from OEM and neocloud partners and your knowledge of each partner’s strategic priorities.</li>
<li>Lead the development and communication of the partnership performance analysis, including financial performance, agreed-upon metrics, and key insights.</li>
<li>Provide enablement and support to strategic partners to ensure they are effectively promoting and selling our products or services.</li>
</ul>
<p>Company builder and collaborator:</p>
<ul>
<li>Collaborate with cross-functional teams to ensure the successful implementation of partnership initiatives.</li>
<li>Work with Mistral engineering and product teams to help advance the customer experience for Mistral’s models and tools on OEM and neocloud technology, including defining new routes to market with OEM partners.</li>
<li>Develop a strong collaboration with Mistral’s GTM team to help drive pipeline and advance opportunities to closure.</li>
<li>Align OEM and neocloud partner regional sales teams with Mistral’s sales counterparts and facilitate a regular MBR cadence.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in Business, Marketing, Information Technology, or a related field. An MBA or related advanced degree is preferred.</li>
<li>Minimum of 10 years of experience in GTM roles, with 5+ years in partner management, business development, or sales roles.</li>
<li>Knowledge of the NVIDIA partner ecosystem and the sales motion between NVIDIA and OEM and neocloud partners.</li>
<li>Ideally, a strong network at the global level within OEM and neocloud partners.</li>
<li>Excellent negotiation, communication, and interpersonal skills.</li>
<li>Strong understanding of the technology or software industry, with a focus on AI infrastructure and partnerships.</li>
<li>Ability to travel as needed to meet with strategic partners and attend industry events.</li>
<li>Strong analytical skills, with the ability to monitor and analyze partnership performance and provide actionable insights.</li>
<li>Builder and self-starter with the ability to work independently and as part of a team.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>💰 Competitive cash salary and equity</li>
<li>🚑 Healthcare: Medical/Dental/Vision covered for you and your family</li>
<li>👴🏻 401K: 6% matching</li>
<li>🚴 Transportation: office parking charges reimbursed, or $120/month for public transport</li>
<li>💡 Coaching: we offer BetterUp coaching on a voluntary basis</li>
<li>🥎 Sport: $120/month reimbursement for gym membership</li>
<li>🥕 Meal voucher: $400 monthly allowance for meals</li>
<li>🌎 Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Partner management, Business development, Sales, NVIDIA partner ecosystem, AI infrastructure, Partnerships</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/2e48b51e-4872-41ee-833d-f8b57d25cf0d</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>2c81c083-464</externalid>
      <Title>Cloud Machine Learning Evangelist</Title>
      <Description><![CDATA[<p>At Hugging Face, we&#39;re on a journey to democratize good AI. As a Cloud Machine Learning Evangelist, your goal will be to increase the impact of the Hugging Face ML Cloud team by educating the community of ML practitioners on how they can benefit by accelerating their training and inference workloads.</p>
<p>The Hugging Face ML Cloud team is working through strategic collaborations with the most used Clouds (AWS, GCP, Azure, Cloudflare), AI Accelerators (incl. NVIDIA, AMD, Intel, Gaudi, Inferentia, TPU), and Systems (Dell, Nutanix), to make it easy for the community to use Hugging Face models and libraries on these compute platforms.</p>
<p>This is not a marketing role or a business development role. Your impact will be driving visibility and usage of integrations with strategic partners, through activities including:</p>
<ul>
<li>Publishing technical blog posts</li>
<li>Contributing documentation and code examples</li>
<li>Speaking to business and technical audiences at partner conferences</li>
<li>Participating in or producing webinars</li>
<li>Building and evangelizing demos</li>
<li>Leading GTM conversations with strategic partners</li>
</ul>
<p>You will be at the forefront of Generative AI (and how to build practical stuff with open source). You will work hand in hand with the most important companies in AI. You will enjoy a lot of autonomy and full creative control, with the goal to have 10x more impact than a similar role at a big tech corporation.</p>
<p><strong>About You</strong></p>
<p>You are passionate about ML engineering: building practical AI applications, putting them in production, and accelerating them to the best of the cloud&#39;s ability. You love learning challenging new engineering concepts and technologies, and discussing them with engineers. You appreciate a good developer experience and take pride in your code being easy to understand. You are a great communicator and educator, comfortable (as much as one can be!) with public speaking to technical audiences. You love engaging with the ML community in a positive and helpful way. Existing engagement on social platforms (GitHub, LinkedIn, Twitter, Reddit, etc.) or other communication/education channels is expected. Experience in open source development will be helpful.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud Machine Learning, Generative AI, Open Source Development, ML Engineering, Developer Experience, NVIDIA, AMD, Intel, Gaudi, Inferentia, TPU, Dell, Nutanix</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hugging Face</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Hugging Face is a platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</Employerdescription>
      <Employerwebsite>https://huggingface.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/F24E2E5058</Applyto>
      <Location>New York, New York</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>290c3d28-4b2</externalid>
      <Title>Partner Solution Architect - ASEAN</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany and Singapore. We are a diverse workforce that thrives in competitive environments and is committed to driving innovation.</p>
<p><strong>Why This Role Matters</strong></p>
<p>You will be the technical linchpin between Mistral and our strategic partners in ASEAN (Nvidia, Dell, Hyperscalers, Global System Integrators), translating our open-weight models and sovereign AI architecture into deployable, scalable solutions.</p>
<p>By designing joint architectures, influencing partner GTM motions, and earning a seat at the CIO/CTO table, you will accelerate Mistral’s technical credibility and deployment velocity across Asia Pacific.</p>
<p>This is a foundational role where you will define how open-weight AI is operationalized at scale in the region.</p>
<p><strong>What You Will Do</strong></p>
<p><strong>Partner Technical Leadership &amp; Architecture Design</strong></p>
<ul>
<li>Lead the technical design, deployment, and enablement of Mistral’s partner solutions, bridging our AI models with partner infrastructure (Nvidia, Dell, Hyperscalers, GSIs) to deliver scalable AI Labs, AI Factories, and sovereign AI architectures.</li>
<li>Serve as the trusted technical advisor to partner CTOs, CIOs, and engineering leaders, shaping joint architectures, guiding GPU/model deployment strategies, and accelerating GTM execution.</li>
<li>Design reference architectures and deployment patterns for partner-led implementations (e.g., multi-GPU inference clusters, AI Lab topologies, private AI clouds).</li>
<li>Innovate the Executive Briefing Center (EBC) function for technical leaders (CIOs, CTOs, CDOs), positioning Mistral as the default choice for enterprise AI.</li>
<li>Co-design sovereign AI reference architectures with Nvidia and Dell (H100, H200, GB200 platforms).</li>
</ul>
<p><strong>Co-Sell &amp; Revenue Enablement</strong></p>
<ul>
<li>Collaborate with Mistral’s partner and sales teams to progress deals, providing technical expertise to penetrate accounts and influence the GTM pipeline.</li>
<li>Support partners in qualifying/disqualifying opportunities, ensuring Mistral solutions unlock maximum value for customers.</li>
<li>Deploy Mistral’s enterprise AI suite (models, fine-tuning, use-case building) in partner-led environments, tailoring solutions to customer requirements.</li>
</ul>
<p><strong>Trusted Advisor &amp; Lighthouse Implementations</strong></p>
<ul>
<li>Drive strategic partner-led opportunities through technical discovery, architecture design, and POC execution.</li>
<li>Lead lighthouse deployments that become referenceable case studies (e.g., Singtel AI Grid, Accenture AI Lab).</li>
<li>Establish a scalable partner enablement framework, training 100+ partner engineers across ASEAN.</li>
</ul>
<p><strong>Product Feedback &amp; Internal Collaboration</strong></p>
<ul>
<li>Coordinate with Mistral’s product and engineering teams to relay partner-specific requirements and feedback.</li>
<li>Align joint GTM and technical execution between Mistral Science, Partner Engineering, and partner field teams.</li>
</ul>
<p><strong>About You</strong></p>
<p><strong>Must-Have</strong></p>
<ul>
<li>10–15 years’ experience in partner-facing technical sales or solution architecture (e.g., Partner SA, Alliance Architect, Partner Technology Strategist).</li>
<li>Proven ability to engage C-suite and senior technical stakeholders (CTO, CIO, Chief Architect) in strategic architecture discussions.</li>
<li>Deep GenAI/LLM expertise: RAG, fine-tuning, prompt engineering, model evaluation, and deployment patterns.</li>
<li>Technical mastery of AI/ML infrastructure (GPU clusters, cloud platforms, model deployment frameworks).</li>
<li>Track record of co-designing and deploying joint solutions with ecosystem partners (Nvidia, Dell, AWS, Accenture, etc.).</li>
<li>Executive communication: ability to articulate science-driven value propositions to technical and business audiences.</li>
<li>Entrepreneurial mindset: operates autonomously in high-growth environments; creates playbooks rather than following them.</li>
<li>Fluent in English; confident working across diverse, cross-cultural teams in Asia.</li>
</ul>
<p><strong>Nice-to-Have</strong></p>
<ul>
<li>Experience with open-weight LLMs or open-source AI stacks (Mistral, Hugging Face, LangChain, vLLM, RAG frameworks).</li>
<li>Prior involvement in AI Lab, AI Factory, or Sovereign Cloud deployments.</li>
<li>Familiarity with data governance, model evaluation, and GPU sizing for large-scale inference.</li>
<li>Network across GSIs and infrastructure partners in Asia.</li>
<li>Exposure to multi-region partner programs or joint GTM initiatives in APJ.</li>
<li>Bonus languages: Korean, Japanese, or Mandarin for regional partner engagement.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>💰 Competitive cash salary and equity</li>
<li>🚑 Health insurance: best in class</li>
<li>🥎 Sport: $90 gym membership allowance</li>
<li>🥕 Food: $200 monthly allowance for meals (solution might evolve as we grow bigger)</li>
<li>🚴 Transportation: $120/month for public transport, or parking charges reimbursed</li>
<li>🏝️ PTO: 18 days per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GenAI/LLM expertise, RAG, fine-tuning, prompt engineering, model evaluation, deployment patterns, AI/ML infrastructure, GPU clusters, cloud platforms, model deployment frameworks, open-weight LLMs, open-source AI stacks, Hugging Face, LangChain, vLLM, data governance, GPU sizing, solution architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/fe3542b5-4f99-4d62-af6a-fbdfd13bf0e4</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>ce88828f-470</externalid>
      <Title>Solutions Architect, AI and ML</Title>
      <Description><![CDATA[<p>We are building the world&#39;s leading AI company and are looking for an experienced Cloud Solution Architect to help customers adopt GPU hardware and software, and to build and deploy Machine Learning (ML), Deep Learning (DL), and data analytics solutions on various cloud computing platforms.</p>
<p>As part of the Solutions Architecture team, we work with some of the most exciting computing hardware and software technologies, including the latest breakthroughs in machine learning and data science. A Solutions Architect is the first line of technical expertise between NVIDIA and our customers, so you will engage directly with developers, researchers, and data scientists at some of NVIDIA&#39;s most strategic technology customers, as well as work directly with business and engineering teams on product strategy.</p>
<p><strong>What you will be doing:</strong></p>
<ul>
<li>Work with Cloud Service Providers to develop and demonstrate solutions based on NVIDIA&#39;s ML/DL and data science software and hardware technologies</li>
<li>Build and deploy AI/ML solutions at scale using NVIDIA&#39;s AI software on cloud-based GPU platforms</li>
<li>Build custom PoCs for solutions that address customers&#39; critical business needs, applying NVIDIA hardware and software technology</li>
<li>Partner with Sales Account Managers or Developer Relations Managers to identify and secure new business opportunities for NVIDIA products and solutions for ML/DL and other software solutions</li>
<li>Prepare and deliver technical content to customers, including presentations about purpose-built solutions, workshops about NVIDIA products and solutions, etc.</li>
<li>Conduct regular technical customer meetings for project/product roadmaps, feature discussions, and introductions to new technologies, and establish close technical ties to the customer to facilitate rapid resolution of customer issues</li>
</ul>
<p><strong>What we need to see:</strong></p>
<ul>
<li>3+ years of Solutions Engineering (or similar Sales Engineering roles) or equivalent experience</li>
<li>3+ years of work-related experience in Deep Learning and Machine Learning, including deep learning frameworks such as TensorFlow or PyTorch; GPU and CUDA experience is extremely helpful</li>
<li>BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields, or equivalent experience</li>
<li>Established track record of deploying solutions in cloud computing environments including AWS, GCP, or Azure</li>
<li>Knowledge of DevOps/MLOps technologies such as Docker/containers, Kubernetes, and data center deployments</li>
<li>Ability to use at least one scripting language (e.g., Python)</li>
<li>Good programming and debugging skills</li>
<li>Ability to communicate your ideas and code clearly through documents, presentations, etc.</li>
</ul>
<p><strong>Ways to stand out from the crowd:</strong></p>
<ul>
<li>AWS, GCP, or Azure Professional Solution Architect certification</li>
<li>Hands-on experience with NVIDIA GPUs and SDKs (e.g., CUDA, RAPIDS, Triton)</li>
<li>System-level experience, specifically with GPU-based systems</li>
<li>Experience with Deep Learning at scale</li>
<li>Familiarity with parallel programming and distributed computing platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Solutions Engineering, Deep Learning, Machine Learning, TensorFlow, PyTorch, GPU, CUDA, DevOps/MLOps, Docker, Kubernetes, data center deployments, Python, AWS, GCP, Azure, RAPIDS, Triton, parallel programming, distributed computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a leading technology company that specialises in designing and manufacturing graphics processing units (GPUs) and high-performance computing hardware.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-WA-Redmond/Solutions-Architect--AI-and-ML_JR2000691</Applyto>
      <Location>Redmond, Santa Clara, Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>1060dfc7-676</externalid>
      <Title>Solution Architect, Computer Aided Engineering</Title>
      <Description><![CDATA[<p><strong>Solution Architect, Computer Aided Engineering</strong></p>
<p>We are looking for a Solution Architect with deep expertise in AI solutions to drive the efficient use of groundbreaking compute platforms across industries. As a trusted technical advisor to our CAE developers and customers, you will be responsible for embedding NVIDIA software into developers&#39; architectures and workflows.</p>
<p><strong>What you&#39;ll be doing:</strong></p>
<ul>
<li>Support Business Development and Sales teams as part of a team of 4, partnering with Industry Business leads, Account Managers, and Developer Relations managers to drive our developers&#39; ecosystem success.</li>
<li>Work directly with developers and customers in a customer-facing setting.</li>
<li>Support developers in adopting NVIDIA libraries and software frameworks as the foundation for modern AI and data platforms.</li>
<li>Analyze application architectures and find opportunities for acceleration.</li>
<li>Provide feedback and collaborate with engineering, product, and research teams.</li>
<li>Deliver trainings, hackathons, and technical demonstrations on NVIDIA solutions and platforms.</li>
</ul>
<p><strong>What we need to see:</strong></p>
<ul>
<li>An MS/PhD degree in Machine Learning, Computational Science, Physics, or a related technical field.</li>
<li>Minimum of 5 years of technical experience at the intersection of physics and machine learning.</li>
<li>Experience in engineering simulations (e.g. fluid dynamics, atmospheric science, Computer-Aided Engineering technologies).</li>
<li>Familiarity with accelerated computing platforms and GPU-based distributed systems.</li>
<li>Experience in algorithm programming using languages like Python and C/C++.</li>
<li>Development experience using major AI frameworks (e.g., PyTorch, TensorFlow, and similar tools).</li>
<li>Familiarity with containers, numerical libraries, modular software design, version control, GitHub.</li>
<li>Experience designing, prototyping, and building complex AI/ML-based solutions for customers.</li>
<li>Able to reason across components such as data pipelines, models, compute, networking, and orchestration.</li>
<li>Solid written and oral communications skills and familiarity with collaborative environments.</li>
<li>Great teammate who can learn, react, and adapt quickly, and is comfortable working in a fast-paced environment.</li>
</ul>
<p><strong>Ways to stand out from the crowd:</strong></p>
<ul>
<li>Development experience with NVIDIA software libraries and GPUs.</li>
<li>Experience with Kubernetes, distributed training, and large-scale inference.</li>
<li>Experience supporting or utilizing PCIe accelerators such as GPUs, FPGAs, DSPs from evaluation to production stages.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Computational Science, Physics, Python, C/C++, PyTorch, Tensorflow, Containers, Numerical libraries, Modular software design, Version control, GitHub, Kubernetes, Distributed training, Large-scale inference, NVIDIA software libraries, GPU-based distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a technology company that has been transforming computer graphics, PC gaming, and accelerated computing for over 25 years. It has a legacy of innovation and a diverse range of products and services.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/Switzerland-Remote/Solution-Architect--Computer-Aided-Engineering_JR2014310-1</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>f8883394-0fc</externalid>
      <Title>Solutions Architect, AI and ML</Title>
      <Description><![CDATA[<p>We are looking for an experienced Cloud Solution Architect to help customers adopt GPU hardware and software, and to build and deploy Machine Learning (ML), Deep Learning (DL), and data analytics solutions on various cloud computing platforms.</p>
<p>As a Solutions Architect, you will engage directly with developers, researchers, and data scientists at some of NVIDIA’s most strategic technology customers, and work directly with business and engineering teams on product strategy.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Help cloud customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on cloud ML services and Kubernetes for large language models (LLMs) and generative AI workloads.</li>
<li>Enhance performance tuning using TensorRT/TensorRT-LLM, vLLM, Dynamo, and Triton Inference Server to improve GPU utilization and model efficiency.</li>
<li>Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to cloud customers implementing AI inference at scale.</li>
<li>Build custom PoCs for solutions that address customers’ critical business needs, applying NVIDIA hardware and software technology.</li>
<li>Partner with Sales Account Managers or Developer Relations Managers to identify and secure new business opportunities for NVIDIA products and solutions for ML/DL and other software solutions.</li>
<li>Prepare and deliver technical content to customers including presentations about purpose-built solutions, workshops about NVIDIA products and solutions, etc.</li>
<li>Conduct regular technical customer meetings for project/product roadmap, feature discussions, and intro to new technologies. Establish close technical ties to the customer to facilitate rapid resolution of customer issues</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.</li>
<li>3+ years in Solutions Architecture with a proven track record of moving AI inference from PoC to production in cloud computing environments, including AWS, GCP, or Azure</li>
<li>3+ years of hands-on experience with Deep Learning frameworks such as PyTorch and TensorFlow</li>
<li>Excellent knowledge of the theory and practice of LLM and DL inference</li>
<li>Strong fundamentals in programming, optimizations, and software design, especially in Python</li>
<li>Experience with containerization and orchestration technologies like Docker and Kubernetes, monitoring, and observability solutions for AI deployments</li>
<li>Knowledge of Inference technologies - NVIDIA NIM, TensorRT-LLM, Dynamo, Triton Inference Server, vLLM, etc</li>
<li>Proficiency in problem-solving and debugging skills in GPU environments</li>
<li>Excellent presentation, communication and collaboration skills</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>AWS, GCP or Azure Professional Solution Architect Certification.</li>
<li>Experience optimizing and deploying large MoE LLMs at scale</li>
<li>Active contributions to open-source AI inference projects (e.g., vLLM, TensorRT-LLM, Dynamo, SGLang, Triton, or similar)</li>
<li>Experience with Multi-GPU Multi-node Inference technologies like Tensor Parallelism/Expert Parallelism, Disaggregated Serving, LWS, MPI, EFA/Infiniband, NVLink/PCIe, etc</li>
<li>Experience developing and integrating monitoring and alerting solutions using Prometheus, Grafana, and NVIDIA DCGM, and with GPU performance analysis tools such as NVIDIA Nsight Systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud Solution Architecture, GPU hardware and Software, Machine Learning (ML), Deep Learning (DL), Data Analytics, Cloud Computing Platforms, Kubernetes, TensorRT, TensorRT-LLM, vLLM, Dynamo, Triton Inference Server, Python, Containerization, Orchestration, Monitoring, Observability, Inference technologies, NVIDIA NIM, Problem-solving, Debugging, GPU environments, AWS, GCP, Azure, Professional Solution Architect Certification, Large MoE LLMs, Open-source AI inference projects, Multi-GPU Multi-node Inference technologies, Monitoring and alerting solutions, Prometheus, Grafana, NVIDIA DCGM, GPU performance Analysis, NVIDIA Nsight Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a leading technology company that specializes in designing and manufacturing graphics processing units (GPUs) and high-performance computing hardware.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-WA-Redmond/Solutions-Architect--AI-and-ML_JR2005988-1</Applyto>
      <Location>Redmond, Santa Clara, Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>cf4fd05b-818</externalid>
      <Title>Senior Software Engineer, NCCL</Title>
      <Description><![CDATA[<p>We are looking for a highly motivated senior software engineer to join our communication libraries and network software team. The position will be part of a fast-paced crew that develops and maintains software for complex heterogeneous computing systems that power disruptive products in High Performance Computing and Deep Learning.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and maintain highly optimized communication runtimes for Deep Learning frameworks (e.g. NCCL for TensorFlow/PyTorch) and HPC programming interfaces (e.g. UCX for MPI/OpenSHMEM) on GPU clusters.</li>
<li>Participate in and contribute to parallel programming interface specifications like MPI/OpenSHMEM.</li>
<li>Design, implement and maintain system software that enables interactions among GPUs and interactions between GPUs and other system components.</li>
<li>Create proof-of-concepts to evaluate and motivate extensions in programming models, new designs in runtimes and new features in hardware.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>M.S./Ph.D. degree in CS/CE or equivalent experience.</li>
<li>5+ years of relevant experience.</li>
<li>Excellent C/C++ programming and debugging skills.</li>
<li>Strong experience with Linux.</li>
<li>Expert understanding of computer system architecture and operating systems.</li>
<li>Experience with parallel programming interfaces and communication runtimes.</li>
<li>Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporate environment.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>A deep understanding of technology and a passion for what you do.</li>
<li>Experience with CUDA programming and NVIDIA GPUs.</li>
<li>Knowledge of high-performance networks such as InfiniBand, iWARP, etc.</li>
<li>Experience with HPC applications.</li>
<li>Experience with Deep Learning frameworks such as PyTorch, TensorFlow, etc.</li>
<li>Strong collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic matrix environment.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Highly competitive salaries.</li>
<li>Comprehensive benefits package.</li>
<li>Eligibility for equity.</li>
<li>Opportunity to work with a world-class engineering team.</li>
<li>Ability to work in a dynamic matrix environment.</li>
<li>Opportunity to contribute to cutting-edge technology.</li>
<li>Flexible work arrangements.</li>
<li>Professional development opportunities.</li>
</ul>
<p><strong>How to Apply:</strong></p>
<p>Applications for this job will be accepted at least until March 13, 2026. NVIDIA uses AI tools in its recruiting processes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C/C++, Linux, Computer system architecture, Operating systems, Parallel programming interfaces, Communication runtimes, CUDA programming, NVIDIA GPUs, High-performance networks, HPC applications, Deep Learning Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a leading developer of graphics processing units (GPUs) and high-performance computing hardware and software. The company&apos;s products are used in a wide range of applications, including artificial intelligence, high-performance computing, and visualization.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-Software-Engineer--GPU-Communications-and-Networking_JR1997186</Applyto>
      <Location>Santa Clara</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b151fcc2-2fb</externalid>
      <Title>Member of Technical Staff, High Performance Computing Engineer</Title>
      <Description><![CDATA[<p>We are looking for experienced Member of Technical Staff, High Performance Computing Engineers to help build and scale the infrastructure that trains our frontier models and powers the next evolution of our personal AI, Copilot.</p>
<p>This role offers the unique opportunity to work on some of the largest scale supercomputers in the world – a rare chance to operate at such a significant scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, operate, and maintain large-scale HPC environments, drawing on hands-on engineering experience in production settings.</li>
<li>Own the deployment, configuration, and day-to-day operation of HPC schedulers (e.g., SLURM, Kubernetes), ensuring reliable and efficient job scheduling at scale.</li>
<li>Serve as a technical owner for at least one core HPC domain (GPU compute, high-performance storage, networking, or similar), including ongoing maintenance, performance tuning, and troubleshooting of massive clusters.</li>
<li>Develop and maintain automation and tooling using Bash and/or Python to improve cluster reliability, observability, and operational efficiency.</li>
<li>Partner closely with researchers and engineers to support their workloads, troubleshoot cluster usage issues, and triage failed or underperforming jobs to resolution.</li>
<li>Drive work forward independently by navigating ambiguity and technical roadblocks, delivering incremental improvements that get capabilities into users’ hands quickly.</li>
</ul>
<p><strong>Qualifications</strong></p>
<p>Bachelor’s degree in Computer Science or a related technical field AND 4+ years of technical engineering experience deploying or operating on-premise or cloud high-performance clusters, AND 4+ years of experience working with high-scale training clusters (e.g., NVIDIA InfiniBand clusters, SLURM, Kubernetes, Ray), AND 4+ years of experience building scalable services on top of public cloud infrastructure such as Azure, AWS, or GCP, OR equivalent experience.</p>
<p><strong>Preferred Qualifications</strong></p>
<p>Master’s degree in Computer Science or a related technical field AND 6+ years of technical engineering experience deploying or operating on-premise or cloud high-performance clusters, AND 6+ years of experience working with high-scale training clusters (e.g., NVIDIA InfiniBand clusters, SLURM, Kubernetes, Ray), AND 6+ years of experience building scalable services on top of public cloud infrastructure such as Azure, AWS, or GCP, OR equivalent experience.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>HPC, SLURM, Kubernetes, GPU compute, high-performance storage, networking, Bash, Python, nvidia InfiniBand clusters, Ray, LLM training clusters, AI platforms, Machine Learning frameworks, large-scale HPC or GPU systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that develops and markets software, services, and solutions for personal and business use. It is one of the largest and most influential technology companies in the world.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-high-performance-computing-engineer-mai-superintelligence-team-3/</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>5d37a7c7-d2a</externalid>
      <Title>ML Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>The ML Infrastructure team at Cursor builds large-scale compute, storage, and software infrastructure to support the company&#39;s work building the world&#39;s best agentic coding model. We&#39;re looking for strong engineers who are interested in building high-performance infrastructure and the software to support it. This role works closely with ML researchers and engineers to enable their work through improvements to our training framework, systems reliability/performance, and developer experience.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Collaborate with ML researchers to improve the throughput and reliability of training</li>
<li>Work with OEMs, cloud service providers, and others to plan and build cutting-edge GPU infrastructure</li>
<li>Improve the density and scalability of compute environments to enable increasingly large RL workloads</li>
<li>Create software and systems to automate building, monitoring, and running GPU clusters</li>
<li>Build workload scheduling and data movement systems to support Cursor&#39;s growing training footprint</li>
</ul>
<p><strong>You may be a fit if you have:</strong></p>
<ul>
<li>A strong background in systems and infrastructure-focused software engineering, particularly in Python, TypeScript, Rust, and Golang</li>
<li>Experience with distributed storage and networking infrastructure, particularly on Linux systems across cloud and bare metal environments</li>
<li>Exposure to large-scale systems and their unique challenges, ideally across thousands of nodes with significant resource footprints</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Operational exposure to Nvidia GPUs with Infiniband or RoCE, particularly with Blackwell and Hopper-class hardware</li>
<li>Exposure to Ray, Slurm, or other common compute and runtime schedulers</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Typescript, Rust, Golang, Distributed storage, Networking infrastructure, Linux systems, Kubernetes, Nvidia GPUs, Infiniband, RoCE, Blackwell, Hopper-class hardware, Ray, Slurm</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a technology organisation building the world&apos;s best agentic coding model. The company has a large-scale compute, storage, and software infrastructure to support its work.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-ml-infrastructure</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d3f7efba-55c</externalid>
      <Title>Simulation Environments Engineer</Title>
      <Description><![CDATA[<p><strong>Simulation Environments Engineer</strong></p>
<p><strong>About the Team</strong></p>
<p>Our Robotics team is focused on unlocking general-purpose robotics and pushing towards AGI-level intelligence in dynamic, real-world settings. Working across the entire model stack, we integrate cutting-edge hardware and software to explore a broad range of robotic form factors. We strive to seamlessly blend high-level AI capabilities with the constraints of physical systems to improve people’s lives.</p>
<p><strong>About the Role</strong></p>
<p>We are hiring a <strong>Simulation Environments Engineer</strong> to build the tooling and infrastructure that enable high-coverage, realistic virtual environments for robotics research and evaluation. This role is focused on <em>creating the systems</em> (not necessarily hand-crafting every asset) that let researchers and engineers describe, visualize, generate, and validate task environments at scale. You will design pipelines for importing and vetting third-party content, author procedural and randomized scenario generators, and ship ergonomic tools that make environment creation fast, repeatable, and testable. This role sits at the intersection of game-engine practice, asset engineering, and large-scale simulation infrastructure.</p>
<p><strong>This role is based in San Francisco, CA, and requires in-person 3 days a week.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build interactive and programmatic tooling to describe, preview, and validate scenes and tasks so researchers can author scenarios quickly and repeatedly.</li>
<li>Create content pipelines to curate, convert, optimize and quality-check assets (visual + collision) from third-party collections and internal sources; define standards so assets behave predictably across engines and tasks.</li>
<li>Implement robust importers and adapters that bring environments and setups from Isaac/Unity/Unreal/Omniverse/other repos into our sim pipelines while preserving fidelity and ensuring performance.</li>
<li>Build frameworks for procedural generation and controlled randomization (visual, physical, kinematic) so models see a systematic, measurable variety of conditions.</li>
<li>Define and enforce quality gates for environments (visual fidelity, collision correctness, physical plausibility) and instrument validation tooling so environments meet realism/coverage goals.</li>
<li>Connect environment tooling to CI/CD, presubmit checks, large-scale simulation farms and model-eval pipelines so environments can be tested automatically and run at scale. (You’ll partner with sim-pipelines and sim-realism owners.)</li>
<li>Create processes and templates to onboard new object libraries and contracted asset work; provide clear acceptance tests and automation for vendor deliverables.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Enjoy building ergonomic tooling that empowers other engineers and researchers to produce high-quality environments quickly.</li>
<li>Have practical experience with modern world engines (NVIDIA Isaac Sim, Unity, Unreal Engine, Omniverse) or equivalent production pipelines and can choose and integrate the right platform for each use case.</li>
<li>Are comfortable with the full content pipeline: CAD/asset import, USD/GLTF/FBX/texture workflows, collision mesh generation, LODs, and material/physics metadata.</li>
<li>Have built or used procedural generation and domain randomization systems to produce broad, task-relevant variability.</li>
<li>Care about quality control and validation — you like to design automated checks and visual/quantitative diagnostics that ensure environments are correct and performant.</li>
<li>Can collaborate across functions — you’ll work closely with researchers, physics/realism engineers, SWE/RE, and vendors to ensure environments are both realistic and actionable for ML training and evaluation.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>modern world engines, NVIDIA Isaac Sim, Unity, Unreal Engine, Omniverse, procedural generation, domain randomization, CAD/asset import, USD/GLTF/FBX/texture workflows, collision mesh generation, LODs, material/physics metadata, game-engine practice, asset engineering, large-scale simulation infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a private company with a large team of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/39cd0dd8-520d-4932-80bf-7495a1d1d11b</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f2722128-3e2</externalid>
      <Title>Inference Runtime, Engineering Manager</Title>
      <Description><![CDATA[<p><strong>Inference Runtime, Engineering Manager</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$455K – $555K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Our Inference team brings OpenAI’s most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to use and access our state-of-the-art AI models, allowing them to do things they’ve never been able to do before. We focus on performant and efficient model inference, as well as accelerating research progression via model inference.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for an engineering leader to build and lead a world-class team of AI systems and modeling engineers who take the world&#39;s largest and most capable AI models and optimize them for use in a high-volume, low-latency, and high-availability production and research environment.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead a team of engineers who are experts in distributed systems, with a deep understanding of model architecture and of system co-design with research and production teams.</li>
<li>Work alongside machine learning researchers, engineers, and product managers to bring our latest technologies into production.</li>
<li>Work in an outcome-oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.</li>
<li>Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our model inference stack.</li>
<li>Build tools to give us visibility into our bottlenecks and sources of instability, and then design and implement solutions to address the highest-priority issues.</li>
<li>Optimize our code and fleet of GPUs to utilize every FLOP and every GB of GPU RAM of our hardware.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have an understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for inference.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have at least 15 years of professional software engineering experience.</li>
<li>Have or can quickly gain familiarity with PyTorch, NVIDIA GPUs and the software stacks that optimize them (e.g. NCCL, CUDA), as well as HPC technologies such as InfiniBand, MPI, NVLink, etc.</li>
<li>Have experience architecting, building, observing, and debugging production distributed systems. Bonus points if you have worked on performance-critical distributed systems.</li>
<li>Have needed to rebuild or substantially refactor production systems several times over due to rapidly increasing scale.</li>
<li>Are self-directed and enjoy figuring out the most important problem to work on.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$455K – $555K</Salaryrange>
      <Skills>PyTorch, NVIDIA GPUs, NCCL, CUDA, InfiniBand, MPI, NVLink, HPC technologies, Distributed systems, Model architecture, System co-design, Machine learning, Research, Production, Software engineering, GPU optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4f998abb-4510-4bd3-9922-161599625171</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>d5390946-539</externalid>
      <Title>Software Engineer, Model Inference</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Model Inference</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $555K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Our Inference team brings OpenAI’s most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to use and access our state-of-the-art AI models, allowing them to do things they’ve never been able to do before. We focus on performant and efficient model inference, as well as accelerating research progress via model inference.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for an engineer who wants to take the world&#39;s largest and most capable AI models and optimize them for use in a high-volume, low-latency, and high-availability production and research environment.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Work alongside machine learning researchers, engineers, and product managers to bring our latest technologies into production.</li>
<li>Work alongside researchers to enable advanced research through awesome engineering.</li>
<li>Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our model inference stack.</li>
<li>Build tools to give us visibility into our bottlenecks and sources of instability, and then design and implement solutions to address the highest-priority issues.</li>
<li>Optimize our code and fleet of Azure VMs to utilize every FLOP and every GB of GPU RAM of our hardware.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have an understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for inference.</li>
<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>
<li>Have at least 5 years of professional software engineering experience.</li>
<li>Have or can quickly gain familiarity with PyTorch, NVIDIA GPUs and the software stacks that optimize them (e.g. NCCL, CUDA), as well as HPC technologies such as InfiniBand, MPI, NVLink, etc.</li>
<li>Have experience architecting, building, observing, and debugging production distributed systems. Bonus points if you have worked on performance-critical distributed systems.</li>
<li>Have needed to rebuild or substantially refactor production systems several times over due to rapidly increasing scale.</li>
<li>Are self-directed and enjoy figuring out the most important problem to work on.</li>
<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $555K • Offers Equity</Salaryrange>
      <Skills>PyTorch, NVIDIA GPUs, NCCL, CUDA, HPC technologies, InfiniBand, MPI, NVLink, Azure VMs, GPU RAM, FLOP, modern ML architectures, performance optimization, distributed systems, performance-critical distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/83b6755d-7785-4186-9050-5ef3ad127941</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a6f2cc66-67b</externalid>
      <Title>Networking Operating System Firmware Engineer</Title>
      <Description><![CDATA[<p><strong>Networking Operating System Firmware Engineer</strong></p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>
<p><strong>About the Role</strong></p>
<p>We’re seeking a Networking Operating System Firmware Engineer to help bootstrap and scale the switching layer of our AI supercomputers. In this role, you’ll build and maintain custom SONiC NOS images from scratch, working across the Linux kernel, switch ASIC SAI/SDKs, platform drivers, control-plane services, and orchestration layers.</p>
<p>You will validate, configure, and optimize switch platforms used across our high-bandwidth cluster fabric, ensuring performance, reliability, availability, and seamless integration with fleet automation. You’ll collaborate with hardware and systems teams and guide vendors to meet stringent technical expectations.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, develop, and maintain custom SONiC NOS images for large-scale bleeding-edge AI fabrics.</li>
<li>Integrate and configure Linux kernel components, device drivers, switch ASIC SDKs, and SAI layers.</li>
<li>Bring up new switch platforms (thermal/fan control, power monitoring, transceiver management, watchdogs, OSFP CMIS, LEDs, CPLDs, etc.).</li>
<li>Extend and customize SONiC services for routing, telemetry, control-plane state, and distributed automation.</li>
<li>Work with hardware teams to validate ASIC configurations, link bring-up, SerDes tuning, buffer profiles, and performance baselines.</li>
<li>Evaluate switch silicon SDK releases, track vendor deliverables, and define platform requirements with vendors and ASIC partners.</li>
<li>Debug complex issues spanning kernel, platform drivers, SONiC dockers, routing agents, orchestration services, hardware signals, and network topology.</li>
<li>Integrate switches into fleet-wide monitoring, remote diagnostics, telemetry pipelines, and automated lifecycle workflows.</li>
<li>Develop robust CI/build pipelines for reproducible NOS builds and controlled rollout across the fleet.</li>
<li>Support factory bring-up and qualification all the way through mass deployment.</li>
<li>Collaborate, architect, implement, and deploy novel networking protocols and technologies to achieve maximum performance and reliability at AI factory scale.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have proven experience working with SONiC or comparable NOS stacks (FBOSS, Cumulus Linux, Arista EOS, Junos PFE-level integration, etc.).</li>
<li>Have experience updating OpenConfig gNMI interfaces and YANG data models.</li>
<li>Have a strong background in the Linux kernel, network device drivers, and low-level OS internals.</li>
<li>Have experience integrating Broadcom / Marvell / NVIDIA / Intel ASIC SDKs and SAI implementations.</li>
<li>Are proficient in C, C++, and Python; familiarity with Rust/Go is a plus.</li>
<li>Have a deep understanding of L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, and telemetry.</li>
<li>Have hands-on experience with hardware platform bring-up and board-level debugging.</li>
<li>Are familiar with CI/CD pipelines, distributed config/state management, and large-scale automation.</li>
<li>Excel at cross-functional problem solving in high-performance, distributed environments.</li>
<li>Can lead teams to deliver a project end to end.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$266K – $445K</Salaryrange>
      <Skills>SONiC, Linux kernel, network device drivers, low-level OS internals, C, C++, Python, Rust/Go, L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, telemetry, OpenConfig gNMI interfaces, YANG data models, Broadcom / Marvell / NVIDIA / Intel ASIC SDKs, SAI implementations, CI/CD pipelines, distributed config/state management, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all. It is a privately held company with a large team of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/582b878e-61bf-4be2-8b30-623434baf726</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f8953efe-b98</externalid>
      <Title>Member of Technical Staff, Evaluations Engineering</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Evaluations Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a highly skilled and experienced engineer to join our Evaluations Engineering team. As a Member of Technical Staff, Evaluations Engineer, you will be responsible for developing and tuning scalable pretraining software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, benchmarking GB200 and AMD MIxxx GPU clusters, and gathering data and insights to develop the pretraining compute roadmap. A deep interest in conversational AI and its deployment is essential.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>
<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>
<li>Gather data and insights to develop the pretraining compute roadmap.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with generative AI.</li>
<li>Experience with distributed computing.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Software Engineering IC6 – The typical base pay range for this role across the U.S. is USD $163,000 – $296,400 per year.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with benchmarking GPU clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that aim to make a positive impact on people&apos;s lives. Microsoft AI is committed to advancing the field of AI and making it more accessible to everyone.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-evaluations-engineering-mai-superintelligence-team-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>046c8733-208</externalid>
      <Title>Member of Technical Staff, Evaluations Engineering</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Evaluations Engineer to help build the next wave of capabilities of our personalized AI assistant, Copilot. We’re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a highly skilled and experienced Evaluations Engineer to join our team. As an Evaluations Engineer, you will be responsible for developing and tuning scalable pretraining software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, benchmarking GB200 and AMD MIxxx GPU clusters, and gathering data and insights to develop the pretraining compute roadmap. A deep interest in conversational AI and its deployment is essential.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>
<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>
<li>Gather data and insights to develop the pretraining compute roadmap.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with generative AI.</li>
<li>Experience with distributed computing.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with conversational AI and its deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that aim to make a positive impact on people&apos;s lives. Microsoft AI is committed to advancing the field of AI and making it more accessible to everyone.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-evaluations-engineering-mai-superintelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>675d41e9-5f9</externalid>
      <Title>Member of Technical Staff, Reinforcement Learning Systems</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Reinforcement Learning Systems to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology.</p>
<p><strong>About the Role</strong></p>
<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between that research and robust, production-grade distributed systems. The ideal candidate combines distributed systems expertise with a scientific mindset, and will be able to build complex, scalable systems from the ground up, identify and resolve performance bottlenecks, debug complex cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and tune the pretraining scalable software for Nvidia GB200 72NVL CX8 and AMD MIxxx architectures.</li>
<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>
<li>Gather data and insights to develop the pretraining compute roadmap.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with generative AI.</li>
<li>Experience with distributed computing.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Have a high degree of craftsmanship and pay close attention to detail.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with Nvidia GB200 72NVL CX8 and AMD MIxxx architectures, Experience with GPU clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is dedicated to advancing artificial intelligence and machine learning. They are responsible for developing and deploying AI models that power various products and services, including Copilot and Bing. Microsoft AI is committed to creating AI that amplifies human potential while ensuring humanity remains firmly in control.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team-3/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e20bc29d-085</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Principal Software Engineer for its Redmond office. This role sits at the heart of building the next generation platform for Bing and Microsoft AI. You&#39;ll work directly with stakeholders to determine requirements, lead the identification of dependencies and the development of design documents, and drive project plans, release plans, and work items.</p>
<p><strong>About the Role</strong></p>
<p>The Microsoft AI Web Data team is looking for a Principal Software Engineer to help us build the next generation platform for Bing and Microsoft AI. In Web Data, we are on a mission to build the most vast, safe, and accurate model of the Web to power search and AI. We are pushing frontiers of scalability and index quality by creating models and systems for discovering, storing, processing Web content, protecting our users &amp; platform from Spam, Scams, and malware by keeping a step ahead of bad actors, and operating AI solutions.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Partner with stakeholders within Safety, Web Data, and partner teams, to determine requirements, lead the identification of dependencies and the development of design documents, and drive project plans, release plans, and work items.</li>
<li>Lead by example, and mentor other engineers to produce extensible, scalable, high performance, resilient, and maintainable design and code.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with processing Terabyte to Petabyte scale data with efficient algorithms for feature engineering, and experience with optimizing for high inference ROI and deploying AI/ML models including, but not limited to, Decision Tree and Forest models, encoder only and generative LLM/SLM models, multi-modal models, on NVIDIA, AMD, TPU or equivalent accelerators.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Terabyte to Petabyte scale data, AI/ML models, Decision Tree and Forest models, encoder only and generative LLM/SLM models, multi-modal models, NVIDIA, AMD, TPU, equivalent accelerators</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that empowers every person and every organization on the planet to achieve more. They come together with a growth mindset, innovate to empower others, and collaborate to realize their shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-34/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a40437fb-92e</externalid>
      <Title>Member of Technical Staff, Reinforcement Learning Systems</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Reinforcement Learning Systems, to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of the Superintelligence team&#39;s work, turning cutting-edge research into robust, production-grade distributed systems.</p>
<p><strong>About the Role</strong></p>
<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between that research and robust, production-grade distributed systems. The ideal candidate combines distributed-systems expertise with a scientific mindset: able to build complex, scalable systems from the ground up, identify and resolve performance bottlenecks, debug cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and tune scalable pretraining software for NVIDIA GB200 NVL72 CX8 and AMD MIxxx architectures.</li>
<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>
<li>Gather data and insights to develop the pretraining compute roadmap.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with generative AI.</li>
<li>Experience with distributed computing.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>A high degree of craftsmanship and close attention to detail.</li>
<li>Enjoy working in a fast-paced, design-driven product development cycle.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>A different base pay range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed Computing, Experience with NVIDIA GB200 NVL72 CX8 and AMD MIxxx architectures, Experience with large-scale reinforcement learning systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is dedicated to advancing artificial intelligence and machine learning. They are responsible for developing and deploying AI models that power various products and services, including Copilot and Bing. Microsoft AI is committed to making AI more accessible and beneficial to society.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>b0dff67a-5b5</externalid>
      <Title>Member of Technical Staff, Reinforcement Learning Systems</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Reinforcement Learning Systems, to help build the world&#39;s most advanced reinforcement learning systems. This role sits at the heart of the Superintelligence team&#39;s work, turning cutting-edge research into robust, production-grade distributed systems.</p>
<p><strong>About the Role</strong></p>
<p>We are responsible for designing, developing, and operating the large-scale reinforcement learning systems that power several use cases across the Superintelligence team. We are looking for individuals who can contribute to cutting-edge research and help bridge the gap between that research and robust, production-grade distributed systems. The ideal candidate combines distributed-systems expertise with a scientific mindset: able to build complex, scalable systems from the ground up, identify and resolve performance bottlenecks, debug cross-system issues with extremely high attention to detail, and contribute to solving scientific and research challenges.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and tune scalable pretraining software for NVIDIA GB200 NVL72 CX8 and AMD MIxxx architectures.</li>
<li>Benchmark GB200 and AMD MIxxx GPU clusters.</li>
<li>Gather data and insights to develop the pretraining compute roadmap.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with generative AI.</li>
<li>Experience with distributed computing.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>A high degree of craftsmanship and close attention to detail.</li>
<li>Enjoy working in a fast-paced, design-driven product development cycle.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>A different base pay range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Generative AI, Distributed computing, Experience with NVIDIA GB200 NVL72 CX8 and AMD MIxxx architectures, Experience with GPU clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is dedicated to advancing artificial intelligence and machine learning. They are responsible for developing and deploying AI models that power various products and services, including Copilot and Bing. Microsoft AI is committed to creating AI that amplifies human potential while ensuring humanity remains firmly in control.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-reinforcement-learning-systems-mai-superintelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>356892b1-542</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer at its Suzhou office. This role sits at the heart of the AI Infrastructure team, optimizing the core inference engine that powers large-scale AI models. You&#39;ll work at the intersection of deep learning algorithms and low-level hardware to push the boundaries of performance.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking an expert Senior GPU Engineer to join our AI Infrastructure team. In this role, you will architect and optimize the core inference engine that powers our large-scale AI models. You will be responsible for pushing the boundaries of hardware performance, reducing latency, and maximizing throughput for Generative AI and Deep Learning workloads. You will work at the intersection of Deep Learning algorithms and low-level hardware, designing custom operators and building a highly efficient training/inference execution engine from the ground up.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Custom Operator Development: Design and implement highly optimized GPU kernels (CUDA/Triton) for critical deep learning operations (e.g., FlashAttention, GEMM, LayerNorm) to outperform standard libraries.</li>
<li>Inference Engine Architecture: Contribute to the development of our high-performance inference engine, focusing on graph optimizations, operator fusion, and dynamic memory management (e.g., KV Cache optimization).</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Expertise in the CUDA programming model and NVIDIA GPU architectures (specifically Ampere/Hopper).</li>
<li>Deep understanding of the memory hierarchy (Shared Memory, L2 cache, Registers), warp-level primitives, occupancy optimization, and bank conflict resolution.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Proven ability to navigate and modify complex, large-scale codebases (e.g., PyTorch internals, Linux kernel).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Starting January 26, 2026, Microsoft AI employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>
<li>Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or protected veteran status.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, CUDA, NVIDIA GPU architectures, Deep Learning algorithms, low-level hardware, PyTorch, Linux kernel</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-18/</Applyto>
      <Location>Suzhou</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>