<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>51b57192-d10</externalid>
      <Title>Member of Technical Staff, Capacity &amp; Efficiency Infrastructure - MAI Superintelligence Team</Title>
<Description><![CDATA[<p>Microsoft AI is looking for a Member of Technical Staff – Capacity &amp; Efficiency Infrastructure to help us manage, and improve the efficiency of, our compute fleet. We&#39;re seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective. The ideal candidate enjoys building world-class consumer experiences and products in a fast-paced environment. You will actively contribute to the development of AI models powering our innovative products. Expect to wear multiple hats and work across engineering, research, and everything in between.</p>
<p>Your contributions will span model architecture, data curation, training and inference infrastructure, evaluation protocols, alignment and reinforcement learning from human feedback (RLHF), and many other exciting topics at the cutting edge of AI. Microsoft AI is building the training infrastructure that powers frontier-scale models and advances research toward humanist superintelligence. As a Member of Technical Staff – Capacity &amp; Efficiency, you will contribute to a fast-moving codebase that enables training at an unprecedented scale. This role will require building software and mathematical models for measuring the effectiveness of our capacity usage and then developing tools and techniques to help us improve. This will require you to partner with ML researchers to scale up the latest research recipes, implement new forms of distributed training parallelism, and ensure the reliability and performance of thousands of GPUs across our supercomputing fleet. Profiling, benchmarking, debugging, and fine-grained optimization are core to this role, demanding both engineering rigor and creativity.</p>
<p>Microsoft Superintelligence Team:</p>
<p>The MAI Superintelligence Team (MAIST) is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.</li>
<li>Build and evolve telemetry systems to provide visibility into infrastructure &amp; ML model performance, utilization, and cost-related metrics.</li>
<li>Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</li>
<li>Drive architectural improvements across various ML services that deliver measurable efficiency gains.</li>
<li>Build and evolve tools to automatically provide insights and recommendations to improve fleet-wide efficiency.</li>
<li>Optimize collective communication libraries (e.g., NCCL) for emerging NVLink and InfiniBand topologies.</li>
<li>Partner with ML researchers and infrastructure engineers to understand their roadmaps and future needs, and develop plans to balance growth with efficiency.</li>
<li>Collaborate with hardware teams to optimize for next-generation accelerators (NVIDIA, MAIA, and beyond).</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Deep understanding of the fundamentals of GPU architectures and DL/LLM architectures.</li>
<li>Deep experience in profiling and analyzing performance in large-scale distributed computing systems.</li>
<li>Experience with low-level GPU programming (CUDA, Triton, NCCL) and frameworks such as PyTorch or JAX.</li>
<li>Experience in leading technical projects and supporting architectural decisions with data.</li>
<li>Experience building infrastructure for large-scale machine learning or generative AI workloads.</li>
<li>Experience in networking (InfiniBand, NVLink), storage systems, or distributed training parallelisms.</li>
<li>Track record of contributing to high-performance computing or large-scale AI infrastructure projects.</li>
</ul>
<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
<Salaryrange>$119,800 – $234,700 per year</Salaryrange>
      <Skills>C, C++, Python, GPU architectures, DL/LLM architectures, low-level GPU programming, PyTorch, JAX, networking, storage systems, distributed training parallelisms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
<Employerdescription>Microsoft AI is the division of Microsoft responsible for consumer AI products and frontier model research. Microsoft is one of the largest technology companies in the world.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
<Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-capacity-efficiency-infrastructure-mai-superintelligence-team-2/</Applyto>
      <Location>Redmond</Location>
<Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>540d480d-7d6</externalid>
      <Title>Member of Technical Staff, Capacity &amp; Efficiency Infrastructure - MAI Superintelligence Team</Title>
<Description><![CDATA[<p>Microsoft AI is looking for a Member of Technical Staff – Capacity &amp; Efficiency Infrastructure to help us manage, and improve the efficiency of, our compute fleet. We&#39;re seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective. The ideal candidate enjoys building world-class consumer experiences and products in a fast-paced environment. You will actively contribute to the development of AI models powering our innovative products. Expect to wear multiple hats and work across engineering, research, and everything in between.</p>
<p>Your contributions will span model architecture, data curation, training and inference infrastructure, evaluation protocols, alignment and reinforcement learning from human feedback (RLHF), and many other exciting topics at the cutting edge of AI. Microsoft AI is building the training infrastructure that powers frontier-scale models and advances research toward humanist superintelligence. As a Member of Technical Staff – Capacity &amp; Efficiency, you will contribute to a fast-moving codebase that enables training at an unprecedented scale. This role will require building software and mathematical models for measuring the effectiveness of our capacity usage and then developing tools and techniques to help us improve. This will require you to partner with ML researchers to scale up the latest research recipes, implement new forms of distributed training parallelism, and ensure the reliability and performance of thousands of GPUs across our supercomputing fleet. Profiling, benchmarking, debugging, and fine-grained optimization are core to this role, demanding both engineering rigor and creativity.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.</li>
<li>Build and evolve telemetry systems to provide visibility into infrastructure &amp; ML model performance, utilization, and cost-related metrics.</li>
<li>Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems.</li>
<li>Drive architectural improvements across various ML services that deliver measurable efficiency gains.</li>
<li>Build and evolve tools to automatically provide insights and recommendations to improve fleet-wide efficiency.</li>
<li>Optimize collective communication libraries (e.g., NCCL) for emerging NVLink and InfiniBand topologies.</li>
<li>Partner with ML researchers and infrastructure engineers to understand their roadmaps and future needs, and develop plans to balance growth with efficiency.</li>
<li>Collaborate with hardware teams to optimize for next-generation accelerators (NVIDIA, MAIA, and beyond).</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C++ or Python OR Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C++ or Python OR equivalent experience.</li>
<li>Deep understanding of the fundamentals of GPU architectures and DL/LLM architectures.</li>
<li>Deep experience in profiling and analyzing performance in large-scale distributed computing systems.</li>
<li>Deep experience in profiling and analyzing the performance of ML models, especially GenAI models.</li>
<li>Experience with low-level GPU programming (CUDA, Triton, NCCL) and frameworks such as PyTorch or JAX.</li>
<li>Experience in leading technical projects and supporting architectural decisions with data.</li>
<li>Experience building infrastructure for large-scale machine learning or generative AI workloads.</li>
<li>Experience in networking (InfiniBand, NVLink), storage systems, or distributed training parallelisms.</li>
<li>Track record of contributing to high-performance computing or large-scale AI infrastructure projects.</li>
</ul>
<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$119,800 – $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, GPU architectures, DL/LLM architectures, low-level GPU programming, PyTorch, JAX, networking, storage systems, distributed training parallelisms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
<Employerdescription>Microsoft AI is the division of Microsoft responsible for consumer AI products and frontier model research. Microsoft is one of the largest technology companies in the world.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
<Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-capacity-efficiency-infrastructure-mai-superintelligence-team/</Applyto>
      <Location>Mountain View</Location>
<Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
  </jobs>
</source>