<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>859cb1cf-b9c</externalid>
      <Title>Senior AI Infrastructure Engineer, Model Serving Platform</Title>
      <Description><![CDATA[<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>
<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>
<li>Build an internal platform that helps teams discover and evaluate LLM capabilities.</li>
<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>
<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>
<li>Develop monitoring and observability solutions to ensure system health and performance.</li>
<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>5+ years of experience building large-scale, high-performance backend systems.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>
<li>Experience with LLM serving and routing fundamentals (e.g., rate limiting, token streaming, load balancing, budgets; a minimal rate-limiter sketch follows this list).</li>
<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>
<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>
<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
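<p>For readers new to the routing fundamentals above, a minimal token-bucket rate limiter is sketched below; it is illustrative only, not part of this posting, and all names and numbers are invented for the example.</p>
<pre><code>import time

class TokenBucket:
    """Illustrative token-bucket rate limiter; not any particular vendor's implementation."""
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = capacity      # burst ceiling
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate_per_sec=10, capacity=20)
print(bucket.allow())  # True while the bucket still holds tokens
</code></pre>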
<p>Nice to haves:</p>
<ul>
<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference (a minimal usage snippet follows this list).</li>
</ul>
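<p>To give a feel for the frameworks named above, the snippet below uses vLLM’s offline generation API; the model name is only an example placeholder.</p>
<pre><code>from vllm import LLM, SamplingParams

# vLLM handles continuous batching and paged KV-cache management internally.
llm = LLM(model="facebook/opt-125m")  # example model; any HF causal LM works
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["What is speculative decoding?"], params)
print(outputs[0].outputs[0].text)
</code></pre>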
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, text-generation-inference</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4520320005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2bc6ae79-8ee</externalid>
      <Title>Staff Technical Lead for Inference &amp; ML Performance</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Technical Lead for Inference &amp; ML Performance to guide a team in building and optimizing state-of-the-art inference systems. This role is intense yet deeply impactful.</p>
<p>You&#39;ll shape the future of fal&#39;s inference engine and ensure our generative models achieve best-in-class performance. Your work directly impacts our ability to rapidly deliver cutting-edge creative solutions to users, from individual creators to global brands.</p>
<p>Day-to-day, you&#39;ll set technical direction, guide your team to build high-performance inference solutions, and personally contribute to critical inference performance enhancements and optimizations. You&#39;ll collaborate closely with research &amp; applied ML teams, influence model inference strategies and deployment techniques, and drive advanced performance optimizations.</p>
<p>As a leader, you&#39;ll mentor and grow a team of performance-focused engineers, helping them innovate, solve complex performance challenges, and level up their skills.</p>
<p>To succeed in this role, you&#39;ll need to be deeply experienced in ML performance optimization, understand the full ML performance stack, and know inference inside-out. You&#39;ll also need to thrive in cross-functional collaboration and have excellent leadership skills.</p>
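<p>As one concrete example of the performance work described above, GPU-side timing is typically done with CUDA events rather than wall-clock timers; a minimal PyTorch sketch follows (the layer and sizes are arbitrary placeholders):</p>
<pre><code>import torch

model = torch.nn.Linear(4096, 4096).cuda().half()
x = torch.randn(8, 4096, device="cuda", dtype=torch.half)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

for _ in range(10):          # warm-up so one-time costs don't skew the timing
    model(x)
torch.cuda.synchronize()

start.record()
for _ in range(100):
    model(x)
end.record()
torch.cuda.synchronize()     # wait for the GPU before reading the timer
print(f"mean latency: {start.elapsed_time(end) / 100:.3f} ms")
</code></pre>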
<p>If you&#39;re ready to lead the future of inference performance at a fast-paced, high-growth company at the frontier of generative media, apply now!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ML performance optimization, PyTorch, TensorRT, TransformerEngine, Triton, CUTLASS kernels, Quantization, Kernel authoring, Compilation, Model parallelism, Distributed serving, Profiling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>fal is a fast-growing company pioneering the next generation of generative-media infrastructure.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4012780009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8d359571-77e</externalid>
      <Title>Lead Software Engineer, Runtime</Title>
      <Description><![CDATA[<p>As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment.</p>
<p>The role involves architecting and optimizing the inference stack for high-volume, low-latency, high-availability environments, leading the acquisition and automation of benchmarks, collaborating with cross-functional teams, and innovating to enhance our AI-powered applications.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting and optimizing the inference stack for high-volume, low-latency, and high-availability environments</li>
<li>Leading the acquisition and automation of benchmarks at both micro and macro scales (a minimal latency-harness sketch follows this list)</li>
<li>Introducing new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack</li>
<li>Building tools to identify bottlenecks and sources of instability, and designing solutions to address them</li>
<li>Collaborating with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production</li>
<li>Optimizing code and infrastructure to maximize hardware utilization and efficiency</li>
<li>Mentoring and guiding team members, fostering a culture of collaboration, innovation, and continuous learning</li>
</ul>
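<p>As a sketch of the benchmark automation mentioned above, a minimal percentile-latency harness; the <code>infer</code> callable is a stand-in for any model or endpoint under test:</p>
<pre><code>import statistics
import time

def benchmark(infer, payload, n=200):
    """Time n calls and report p50/p99/mean latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        infer(payload)
        samples.append((time.perf_counter() - t0) * 1e3)
    cuts = statistics.quantiles(samples, n=100)   # 99 percentile cut points
    return {"p50_ms": cuts[49], "p99_ms": cuts[98],
            "mean_ms": statistics.mean(samples)}

# Dummy workload standing in for a real inference call:
print(benchmark(lambda p: sum(i * i for i in range(10_000)), None))
</code></pre>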
<p>Requirements include:</p>
<ul>
<li>Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization</li>
<li>Deep understanding of modern ML architectures and experience with performance optimization for inference</li>
<li>Proven track record with large-scale distributed systems, particularly performance-critical ones</li>
<li>Familiarity with PyTorch, TensorRT, CUDA, NCCL</li>
<li>Strong grasp of infrastructure, continuous integration, and continuous delivery (CI/CD) principles</li>
<li>Ability to lead and mentor team members, driving projects from concept to implementation</li>
<li>Results-oriented mindset with a bias towards flexibility and impact</li>
<li>Passion for staying ahead of emerging technologies and applying them to AI-driven solutions</li>
<li>Humble attitude, eagerness to help colleagues, and a desire to see the team succeed</li>
</ul>
<p>Our Culture</p>
<p>We&#39;re driven to build a strong company culture and are looking for individuals with solid alignment with the following:</p>
<ul>
<li>Reason with rigor</li>
<li>Are you audacious enough?</li>
<li>Make our customers succeed</li>
<li>Ship early and accelerate</li>
<li>Leave your ego aside</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, PyTorch, TensorRT, CUDA, NCCL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/0593f273-44f5-4c20-a84c-0406d5da6a0b</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>f8883394-0fc</externalid>
      <Title>Solutions Architect, AI and ML</Title>
      <Description><![CDATA[<p>We are looking for an experienced Cloud Solutions Architect to help customers adopt GPU hardware and software, and to build and deploy machine learning (ML), deep learning (DL), and data analytics solutions on various cloud computing platforms.</p>
<p>As a Solutions Architect, you will engage directly with developers, researchers, and data scientists at some of NVIDIA’s most strategic technology customers, and work directly with business and engineering teams on product strategy.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Help cloud customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on cloud ML services and Kubernetes for large language models (LLMs) and generative AI workloads.</li>
<li>Enhance performance tuning using TensorRT/TensorRT-LLM, vLLM, Dynamo, and Triton Inference Server to improve GPU utilization and model efficiency.</li>
<li>Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to cloud customers implementing AI inference at scale.</li>
<li>Build custom PoCs for solutions that address customers’ critical business needs using NVIDIA hardware and software technology.</li>
<li>Partner with Sales Account Managers or Developer Relations Managers to identify and secure new business opportunities for NVIDIA products and solutions for ML/DL and other software.</li>
<li>Prepare and deliver technical content to customers including presentations about purpose-built solutions, workshops about NVIDIA products and solutions, etc.</li>
<li>Conduct regular technical customer meetings for project/product roadmap, feature discussions, and intro to new technologies. Establish close technical ties to the customer to facilitate rapid resolution of customer issues</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.</li>
<li>3+ years in solutions architecture with a proven track record of moving AI inference from PoC to production in cloud computing environments, including AWS, GCP, or Azure.</li>
<li>3+ years of hands-on experience with deep learning frameworks such as PyTorch and TensorFlow.</li>
<li>Excellent knowledge of the theory and practice of LLM and DL inference</li>
<li>Strong fundamentals in programming, optimization, and software design, especially in Python.</li>
<li>Experience with containerization and orchestration technologies like Docker and Kubernetes, plus monitoring and observability solutions for AI deployments.</li>
<li>Knowledge of inference technologies such as NVIDIA NIM, TensorRT-LLM, Dynamo, Triton Inference Server, and vLLM.</li>
<li>Strong problem-solving and debugging skills in GPU environments.</li>
<li>Excellent presentation, communication, and collaboration skills.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>AWS, GCP or Azure Professional Solution Architect Certification.</li>
<li>Experience optimizing and deploying large MoE LLMs at scale.</li>
<li>Active contributions to open-source AI inference projects (e.g., vLLM, TensorRT-LLM, Dynamo, SGLang, or Triton).</li>
<li>Experience with multi-GPU, multi-node inference technologies such as tensor parallelism/expert parallelism, disaggregated serving, LWS, MPI, EFA/InfiniBand, and NVLink/PCIe.</li>
<li>Experience developing and integrating monitoring and alerting solutions using Prometheus, Grafana, and NVIDIA DCGM, and with GPU performance analysis tools such as NVIDIA Nsight Systems (a minimal metrics sketch follows this list).</li>
</ul>
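<p>To illustrate the monitoring stack named in the final bullet, a minimal prometheus_client exporter for an inference-latency histogram; the metric name, port, and sleep are invented for the example:</p>
<pre><code>import random
import time

from prometheus_client import Histogram, start_http_server

# Histogram of end-to-end inference latency; Grafana can chart its quantiles.
LATENCY = Histogram("inference_latency_seconds", "End-to-end inference latency")

@LATENCY.time()                    # records the duration of every call
def infer(prompt: str) -> str:
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model work
    return "ok"

if __name__ == "__main__":
    start_http_server(9100)        # metrics exposed at :9100/metrics
    while True:
        infer("hello")
</code></pre>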
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud Solution Architecture, GPU hardware and Software, Machine Learning (ML), Deep Learning (DL), Data Analytics, Cloud Computing Platforms, Kubernetes, TensorRT, TensorRT-LLM, vLLM, Dynamo, Triton Inference Server, Python, Containerization, Orchestration, Monitoring, Observability, Inference technologies, NVIDIA NIM, Problem-solving, Debugging, GPU environments, AWS, GCP, Azure, Professional Solution Architect Certification, Large MoE LLMs, Open-source AI inference projects, Multi-GPU Multi-node Inference technologies, Monitoring and alerting solutions, Prometheus, Grafana, NVIDIA DCGM, GPU performance Analysis, NVIDIA Nsight Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a leading technology company that specializes in designing and manufacturing graphics processing units (GPUs) and high-performance computing hardware.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-WA-Redmond/Solutions-Architect--AI-and-ML_JR2005988-1</Applyto>
      <Location>Redmond, WA; Santa Clara, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>db67438e-963</externalid>
      <Title>Director, System Software Engineering - Metropolis Accelerated and Inferencing Software</Title>
      <Description><![CDATA[<p><strong>Director, System Software Engineering - Metropolis Accelerated and Inferencing Software</strong></p>
<p>We are looking for an engineering leader who is hands-on with deep learning, comfortable reading and modifying code rather than just running it. You will lead, encourage, and develop world-class engineering and data teams distributed across Europe, Asia, and the United States.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Architect and operationalize NVIDIA’s end-to-end inference acceleration strategy, powering inferencing and continuous performance improvements.</li>
<li>Drive strategic implementations of TensorRT, vLLM, and other accelerated frameworks for inference solutions on edge and enterprise devices: lead accelerated-computing efforts for key Metropolis verticals, and set up Proofs of Readiness (PORs) and guide their implementation.</li>
<li>Lead customer solutions: collaborate with major Metropolis OEMs and partners to architect highly accelerated, optimized custom deep learning models and inference pipelines for their specific requirements. Offer direct customer support, including debugging, technical education, and handling inquiries from our Metropolis partners and customers. Draft and finalize SOWs with internal customers and partners.</li>
<li>Performance benchmarking: orchestrate efforts to achieve leading results on industry benchmarks such as MLPerf across edge and enterprise devices.</li>
<li>Technical leadership &amp; influence: function as a technical leader for deep learning across multiple teams, providing oversight and build support. Apply customer insights to influence the composition and structure of upcoming SoC/GPU deep learning hardware.</li>
<li>Scaling the team: hire strategically to meet new demands while mentoring existing teams and adapting them to new deep learning challenges.</li>
<li>Represent NVIDIA deep learning solutions in webinars, conferences, and partner events.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Masters in Computer Science/Electrical Engineering or equivalent experience.</li>
<li>A minimum of 8 years of meaningful involvement in machine learning/deep learning research or practical experience, coupled with 7+ years of leadership background and overall 15+ years of industry experience.</li>
<li>10+ years of proven expertise in the embedded software sector, holding technical leadership positions accountable for delivering outstanding production software in complex settings.</li>
<li>Deep Knowledge of GPU, CPU and dedicated deep learning architecture fundamentals and low-level performance optimizations using heterogeneous computing.</li>
<li>Hands-on experience with VLMs, LLMs, or multimodal AI systems applied to perception, data triage, or automated labeling.</li>
<li>Strong expertise in large-scale data processing, systems build, or machine learning pipelines.</li>
<li>Strong communication, careful planning, and technical leadership capabilities.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary package and benefits</li>
<li>Eligible for equity</li>
</ul>
<p><strong>How to Apply:</strong></p>
<p>Applications for this job will be accepted at least until March 13, 2026.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Deep Learning, GPU, CPU, Heterogeneous Computing, TensorRT, vLLM, Proof of Readiness, Customer Support, Technical Education, Performance Benchmarking, Technical Leadership, Team Scaling, Webinars, Conferences, Partner Events, VLMs, LLMs, Multimodal AI Systems, Perception, Data Triage, Automated Labeling, Large-Scale Data Processing, Systems Build, Machine Learning Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>NVIDIA</Employername>
      <Employerlogo>https://logos.yubhub.co/nvidia.com.png</Employerlogo>
      <Employerdescription>NVIDIA is a world leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, medical devices, and more.</Employerdescription>
      <Employerwebsite>https://nvidia.wd5.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Director--Metropolis-Accelerated-and-Inferencing-Software_JR2011299</Applyto>
      <Location>Santa Clara</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>d3a39f4c-d95</externalid>
      <Title>Software Engineer, Inference - Multi Modal</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Inference - Multi Modal</strong></p>
<p><strong>Location:</strong> San Francisco • <strong>Employment Type:</strong> Full time • <strong>Department:</strong> Scaling</p>
<p><strong>Compensation:</strong> $295K – $555K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Inference team powers the deployment of our most advanced models - including our GPT models, 4o Image Generation, and Whisper - across a variety of platforms. Our work ensures these models are available, performant, and scalable in production, and we partner closely with Research to bring the next generation of models into the world. We&#39;re a small, fast-moving team of engineers focused on delivering a world-class developer experience while pushing the boundaries of what AI can do.</p>
<p>We’re expanding into multimodal inference, building the infrastructure needed to serve models that handle image, audio, and other non-text modalities. These workloads are inherently more heterogeneous and experimental, involving diverse model sizes and interactions, more complex input/output formats, and tighter coordination with product and research.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a software engineer to help us serve OpenAI’s multimodal models at scale. You’ll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other multimodal workloads in production.</p>
<p>This work is inherently cross-functional: you’ll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. You&#39;ll build and optimize the systems that let users generate speech, understand images, and interact with models in ways far beyond text.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and implement inference infrastructure for large-scale multimodal models.</li>
<li>Optimize systems for high-throughput, low-latency delivery of image and audio inputs and outputs.</li>
<li>Enable experimental research workflows to transition into reliable production services.</li>
<li>Collaborate closely with researchers, infra teams, and product engineers to deploy state-of-the-art capabilities.</li>
<li>Contribute to system-level improvements including GPU utilization, tensor parallelism, and hardware abstraction layers (a toy tensor-parallel sketch follows this list).</li>
</ul>
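<p>To make the tensor-parallelism bullet concrete, a toy column-parallel matmul on a single device; real systems shard across GPUs and gather the results over interconnects, but the arithmetic is the same:</p>
<pre><code>import torch

# Column-parallel linear layer: split the weight's output columns across
# four "shards" and concatenate the partial results; the product matches
# the unsharded matmul exactly (up to float error).
W = torch.randn(512, 2048)
x = torch.randn(4, 512)

shards = W.chunk(4, dim=1)                       # each shard: 512 x 512
y = torch.cat([x @ w for w in shards], dim=-1)   # gather along features

assert torch.allclose(y, x @ W, atol=1e-5)
</code></pre>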
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience building and scaling inference systems for LLMs or multimodal models.</li>
<li>Have worked with GPU-based ML workloads and understand the performance dynamics of large models, especially with complex data like images or audio.</li>
<li>Enjoy experimental, fast-evolving work and collaborating closely with research.</li>
<li>Are comfortable dealing with systems that span networking, distributed compute, and high-throughput data handling.</li>
<li>Have familiarity with inference tooling like vLLM, TensorRT-LLM, or custom model parallel systems.</li>
<li>Own problems end-to-end and are excited to operate in ambiguous, fast-moving spaces.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience working with image generation or audio synthesis models in production.</li>
<li>Exposure to distributed ML training or system-efficient model design.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $555K • Offers Equity</Salaryrange>
      <Skills>Software Engineer, Inference Infrastructure, GPU-based ML Workloads, Tensor Parallelism, Hardware Abstraction Layers, vLLM, TensorRT-LLM, Custom Model Parallel Systems, Image Generation, Audio Synthesis, Distributed ML Training, System-Efficient Model Design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>295000</Compensationmin>
      <Compensationmax>555000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4d14449e-5e7f-45d4-b103-8776a6c87086</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>61433df5-3e7</externalid>
      <Title>Member of Technical Staff, Multimodal Infrastructure</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Multimodal Infrastructure to help build the next wave of capabilities for our personalized AI assistant, Copilot. We&#39;re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a highly skilled and experienced engineer to join our team as a Member of Technical Staff, Multimodal Infrastructure. The successful candidate will be responsible for designing, developing, and maintaining large-scale multimodal data processing pipelines, model pretraining and post-training frameworks, and model inference and serving frameworks. They will work closely with research scientists and product engineers to solve infrastructure problems, finding a path around roadblocks to get their work into the hands of users quickly and iteratively.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design, develop, and maintain large-scale multimodal data processing pipelines.</li>
<li>Design, develop, and maintain large-scale multimodal model pretraining and post-training frameworks.</li>
<li>Design, develop, and maintain large-scale multimodal model inference and serving frameworks.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Strong proficiency in distributed data processing infrastructure (resource utilization management, fault tolerance, Ray &amp; Spark) and CPU/GPU batch processing optimizations (a minimal Ray sketch follows this list).</li>
<li>Experience with state-of-the-art model inference and serving frameworks.</li>
<li>Experience with image/video/audio data processing.</li>
<li>Experience with common data formats for efficient I/O.</li>
</ul>
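<p>As a flavor of the distributed data processing work above, a minimal Ray sketch that fans a preprocessing function out over shards; the function body is a placeholder:</p>
<pre><code>import ray

ray.init()  # local cluster here; production code attaches to a real cluster

@ray.remote
def preprocess(shard):
    # Placeholder for real image/video/audio decoding work.
    return len(shard)

shards = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
futures = [preprocess.remote(s) for s in shards]   # schedules tasks in parallel
print(sum(ray.get(futures)))                       # blocks and gathers results
</code></pre>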
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Access to cutting-edge technology and tools.</li>
<li>Flexible work arrangements.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Distributed data processing infra, CPU/GPU batch processing optimizations, State-of-the-art model inference and serving frameworks, Image/video/audio data processing, Common data formats for efficient I/O, Ray &amp; Spark, TensorRT-LLM, SGLang, xDiT, Cache-DiT</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that aim to make a positive impact on people&apos;s lives. Microsoft AI is committed to advancing the field of AI and making it more accessible to everyone.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-multimodal-infrastructure-mai-superintelligence-team-3/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>7f56054b-d77</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Principal Software Engineer at their Mountain View office. This role sits at the heart of strategic decision-making, driving innovations in AI infrastructure. You&#39;ll work directly with key partners to understand, design, and implement complex inferencing capabilities for state-of-the-art deep learning models.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will be responsible for engaging directly with key partners to understand, design, and implement complex inferencing capabilities for state-of-the-art deep learning models. You will work with cutting-edge hardware and software stacks to deliver best-in-class inference performance while optimizing for cost, leveraging open-source projects to advance deep learning applications. You will collaborate with external and internal teams to identify new areas for improvement and contribute to innovations that enhance model performance and deployment.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Engage directly with key partners to understand, design, and implement complex inferencing capabilities for state-of-the-art deep learning models.</li>
<li>Work with cutting-edge hardware and software stacks to deliver best-in-class inference performance while optimizing for cost.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with model compression (quantization, distillation, SVD, low-rank methods); an int8 quantization sketch follows this list.</li>
<li>Experience in building high-throughput inference serving stacks (continuous batching, KV-cache optimizations, routing).</li>
<li>Solid experience in GPU inference optimization (CUDA, TensorRT, Triton, or custom GPU kernels).</li>
<li>Proficiency in profiling tools (Nsight, TensorBoard, PyTorch profiler) and ability to identify CPU/GPU bottlenecks.</li>
</ul>
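<p>For context on the model-compression bullet above, a minimal symmetric int8 weight-quantization sketch; production stacks use per-channel scales and calibration data, so this shows only the core idea:</p>
<pre><code>import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: w is approximated by q * scale."""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

w = torch.randn(256, 256)
q, scale = quantize_int8(w)
w_hat = q.float() * scale               # dequantize
print((w - w_hat).abs().max().item())   # reconstruction error stays small
</code></pre>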
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of USD $139,900 – $274,800 per year.</li>
<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, model compression, GPU inference optimization, TensorRT, Triton, CUDA, Nsight, TensorBoard, PyTorch profiler</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that aim to make a positive impact on society. With a strong focus on research and development, Microsoft AI is constantly pushing the boundaries of what is possible with AI.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274800</Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-24/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a15b11dd-765</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Principal Software Engineer at their Redmond office. This role sits at the heart of strategic decision-making, driving innovations in AI infrastructure. You&#39;ll work directly with key partners to shape the company&#39;s direction in AI.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will be responsible for designing and implementing complex software systems that drive innovation in AI infrastructure. You will work with cutting-edge hardware and software stacks to deliver best-in-class inference performance while optimizing for cost, leveraging open-source projects to advance deep learning applications. You will collaborate with external and internal teams to identify new areas for improvement and contribute to innovations that enhance model performance and deployment.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Engage directly with key partners to understand, design, and implement complex inferencing capabilities for state-of-the-art deep learning models, driving innovations in AI infrastructure.</li>
<li>Work with cutting-edge hardware and software stacks to deliver best-in-class inference performance while optimizing for cost, leveraging open-source projects to advance deep learning applications.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with model compression (quantization, distillation, SVD, low-rank methods).</li>
<li>Experience in building high-throughput inference serving stacks (continuous batching, KV-cache optimizations, routing); a toy continuous-batching loop follows this list.</li>
<li>Solid experience in GPU inference optimization (CUDA, TensorRT, Triton, or custom GPU kernels).</li>
<li>Proficiency in profiling tools (Nsight, TensorBoard, PyTorch profiler) and ability to identify CPU/GPU bottlenecks.</li>
</ul>
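<p>To make the continuous-batching bullet concrete, a toy scheduler loop that admits new sequences whenever a slot frees up instead of waiting for a whole batch to drain; every name here is invented for the illustration:</p>
<pre><code>import random
from collections import deque

class Seq:
    """Stand-in for one request; decodes a random number of tokens."""
    def __init__(self, rid):
        self.rid, self.remaining = rid, random.randint(1, 5)
    def step(self):              # one decode iteration
        self.remaining -= 1
    @property
    def done(self):
        return self.remaining == 0

queue = deque(Seq(i) for i in range(20))
active, MAX_SLOTS = [], 8

while queue or active:
    while queue and len(active) < MAX_SLOTS:
        active.append(queue.popleft())   # admit mid-flight: no batch barrier
    for seq in active:
        seq.step()                       # one decode step per live sequence
    active = [s for s in active if not s.done]
</code></pre>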
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, model compression, GPU inference optimization, profiling tools, TensorRT, Triton, CUDA, TensorBoard, PyTorch profiler</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that aim to make a positive impact on society. With a strong focus on research and development, Microsoft AI is constantly pushing the boundaries of what is possible with AI.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274800</Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-23/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>