<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>aebe2661-136</externalid>
      <Title>Rack Product Engineer - AI Rack Infrastructure - Stargate</Title>
      <Description><![CDATA[<p>We are seeking an experienced Rack Product Engineer to own the technical development, manufacturing readiness, and lifecycle performance of rack infrastructure deployed across OpenAI&#39;s datacenters.</p>
<p>This role sits at the intersection of hardware design, manufacturing, supplier engagement, and datacenter deployment. You will work closely with compute, mechanical, power, and networking teams to define rack architectures and ensure they are manufacturable, scalable, and operationally reliable.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Own the technical definition and lifecycle management of rack infrastructure.</li>
<li>Drive rack architecture decisions and translate system requirements into manufacturable rack-level designs.</li>
<li>Partner with design and manufacturing teams to ensure rack systems are optimized for manufacturability, assembly, serviceability, and datacenter installation.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>10+ years of experience in hardware product engineering, system integration, manufacturing engineering, or datacenter infrastructure engineering.</li>
<li>Strong analytical and problem-solving skills, with experience resolving complex hardware integration issues.</li>
<li>Experience developing and deploying rack-level infrastructure for compute systems or datacenters.</li>
<li>Demonstrated ability to drive cross-functional technical programs from concept through manufacturing and deployment.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Experience with hyperscale datacenter infrastructure or large-scale compute deployments.</li>
<li>Deep familiarity with rack architectures, liquid cooling, power distribution systems, and compute integration.</li>
<li>Experience working with ODMs, JDMs, or contract manufacturers.</li>
</ul>
<p>As a Rack Product Engineer at OpenAI, you will have the opportunity to work on cutting-edge projects, collaborate with a talented team, and contribute to the development of innovative AI systems.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205K – $335K</Salaryrange>
      <Skills>hardware product engineering, system integration, manufacturing engineering, datacenter infrastructure engineering, rack-level infrastructure, compute systems, hyperscale datacenter infrastructure, large-scale compute deployments, rack architectures, liquid cooling, power distribution systems, compute integration, ODMs, JDMs, contract manufacturers, Lean Six Sigma, process improvement training</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a company focused on developing and deploying artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>205000</Compensationmin>
      <Compensationmax>335000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4042f9ab-2df3-4745-986a-cf2063bd840b</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>736969e6-3f9</externalid>
      <Title>CPU Storage Tech Lead</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Stargate team is responsible for building the physical infrastructure that powers large-scale AI systems. We design and deliver next-generation data centers optimized for dense compute clusters, advanced networking, and rapidly evolving hardware platforms.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a CPU &amp; Storage Technical Lead to define and drive the server compute and storage architecture strategy for Stargate infrastructure.</p>
<p>In this role, you will own technical direction across CPU platforms, memory configurations, local and disaggregated storage systems, and their integration into large-scale AI clusters. You will evaluate vendor roadmaps, lead platform tradeoff decisions, and ensure compute and storage systems are optimized for training, inference, and supporting services.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Own CPU and storage technical strategy for Stargate compute infrastructure across current and future generations.</li>
<li>Evaluate CPU platforms across performance, efficiency, memory bandwidth, PCIe topology, cost, and roadmap alignment.</li>
<li>Define storage architectures for AI environments, including boot media, local NVMe, shared storage, caching tiers, metadata services, and high-performance data pipelines.</li>
<li>Drive server platform decisions involving CPU, memory, NIC, GPU, and storage subsystem integration.</li>
<li>Partner with performance modeling teams to quantify tradeoffs across compute, memory, I/O, and storage bottlenecks.</li>
<li>Work with silicon and hardware vendors on roadmap influence, feature requests, qualification plans, and technical escalations.</li>
<li>Lead bring-up and validation efforts for new CPU and storage platforms in lab and production environments.</li>
<li>Partner with networking and cluster architecture teams to optimize end-to-end node design and data movement.</li>
<li>Support supply chain and sourcing teams with technical vendor assessments and second-source strategies.</li>
<li>Drive reliability, serviceability, and fleet lifecycle planning for compute and storage platforms.</li>
<li>Translate future AI workload requirements into infrastructure platform specifications.</li>
<li>Provide technical leadership across cross-functional stakeholders and executive reviews.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor’s degree in Computer Engineering, Electrical Engineering, Computer Science, or related technical field; advanced degree preferred.</li>
<li>10+ years of experience in server hardware, systems architecture, data center infrastructure, or hyperscale compute platforms.</li>
<li>Deep expertise in modern CPU architectures (x86, ARM, accelerator host systems) and server platform design.</li>
<li>Strong understanding of memory systems, PCIe/CXL fabrics, NUMA behavior, and platform-level performance constraints.</li>
<li>Experience with storage systems including NVMe, SSD qualification, RAID, distributed storage, object/file systems, or high-performance data pipelines.</li>
<li>Experience evaluating hardware tradeoffs across performance, cost, power, thermals, and supply availability.</li>
<li>Familiarity with GPU clusters and AI training/inference infrastructure strongly preferred.</li>
<li>Experience working directly with OEMs, ODMs, silicon vendors, or storage vendors.</li>
<li>Strong systems thinking with ability to connect component decisions to fleet-level outcomes.</li>
<li>Excellent communication skills with the ability to influence engineering and executive stakeholders.</li>
<li>Proven ability to operate in fast-moving, ambiguous environments with high ownership.</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Experience designing infrastructure for large-scale AI or HPC environments.</li>
<li>Familiarity with CPU vendor roadmaps across AMD, Intel, and ARM ecosystems.</li>
<li>Experience with distributed storage architectures supporting GPU clusters.</li>
<li>Knowledge of fleet operations, hardware lifecycle management, and production deployments at scale.</li>
<li>Prior experience in hyperscale cloud, AI infrastructure, or advanced compute environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see <a href="https://cdn.openai.com/policies/eeo-policy-statement.pdf">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance and the Los Angeles County Fair Chance Ordinance for Employers.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$342K – $555K</Salaryrange>
      <Skills>server hardware, systems architecture, data center infrastructure, hyperscale compute platforms, modern CPU architectures, server platform design, memory systems, PCIe/CXL fabrics, NUMA behavior, platform-level performance constraints, storage systems, NVMe, SSD qualification, RAID, distributed storage, object/file systems, high-performance data pipelines, hardware tradeoffs, performance, cost, power, thermals, supply availability, GPU clusters, AI training/inference infrastructure, OEMs, ODMs, silicon vendors, storage vendors, strong systems thinking, component decisions, fleet-level outcomes, excellent communication skills, influence engineering and executive stakeholders, fast-moving, ambiguous environments, high ownership, infrastructure for large-scale AI or HPC environments, CPU vendor roadmaps across AMD, Intel, and ARM ecosystems, distributed storage architectures supporting GPU clusters, fleet operations, hardware lifecycle management, production deployments at scale, hyperscale cloud, AI infrastructure, advanced compute environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>342000</Compensationmin>
      <Compensationmax>555000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/18a60850-cf8b-4374-a214-ef78b9712deb</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d1e1517-7a3</externalid>
      <Title>Senior Supply Materials Manager</Title>
      <Description><![CDATA[<p>You will join the Global Supplier Management and Technical Sourcing organization, a team responsible for ensuring CoreWeave&#39;s rapidly growing hardware demand converts into clear-to-build, on-time shipments. The team works cross-functionally with Strategic Sourcing, Engineering, Program Management, Operations, and a global supplier base to support aggressive AI infrastructure deployment schedules in a highly supply-constrained environment.</p>
<p>As a Senior Supply Materials Manager, you will own end-to-end supply execution for one or more strategic OEM and ODM partners across multiple concurrent hardware programs. You will ensure allocation commitments, material readiness, and manufacturing capacity align with CoreWeave&#39;s deployment plans. This role operates at the intersection of operations, sourcing, engineering, and suppliers, requiring strong execution discipline and comfort navigating ambiguity.</p>
<p>You will act as the single-threaded owner for supply execution, proactively identifying risks, driving recovery actions, and maintaining operational stability as new NVIDIA-based platforms ramp.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we&#39;ve found compatible with our team. If some of this describes you, we&#39;d love to talk.</p>
<ul>
<li>You love driving execution in complex, supply-constrained environments</li>
<li>You&#39;re curious about how hardware, manufacturing, and supply chains come together at scale</li>
<li>You&#39;re an expert at turning demand signals into executable supply plans</li>
</ul>
<p>Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>supply chain management, materials management, manufacturing operations, data center, hyperscaler, cloud service provider, AI hardware, OEM and ODM supply execution, allocation-constrained environments, end-to-end supply execution, single-threaded owner, supply execution, risk identification, recovery actions, operational stability, experience working directly with Taiwan-based ODMs or global OEM manufacturing partners, familiarity with server, rack-level, or AI system builds including GPU, memory, power, and thermal components, experience supporting multiple overlapping NPI and production ramps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4655160006</Applyto>
      <Location>Taiwan</Location>
      <Country>Taiwan</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0e39aebe-3ad</externalid>
      <Title>Network Engineer - ML Infrastructure (High-Speed Interconnects)</Title>
      <Description><![CDATA[<p>We are seeking exceptional ML Infrastructure Engineers with deep expertise in high-speed interconnect technologies to design, build, and optimise the network fabric that powers large-scale AI training and inference clusters.</p>
<p>This strategic role will drive innovation in high-bandwidth, low-latency, power-efficient interconnects critical for AI/ML clusters based on advanced computing platforms. You will have the opportunity to work on all modalities of interconnects connecting GPUs and switches both inside and between data centres, including our primary front and backend networks that train Grok and that customers use for inference.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, validate, and productise high-speed copper and optical connectivity solutions for AI clusters (100k+ GPU scale).</li>
<li>Own vendor due diligence and onboarding for new 1.6T products, including AECs and pluggable optical transceivers (DR4/8, FR4), with rigorous bring-up &amp; characterisation.</li>
<li>Investigate the opportunity for LPO and LRO in our network.</li>
<li>Evaluate early co-packaged and near-packaged engines for switches and GPUs.</li>
<li>Pathfinding for new interconnect modalities, including VCSEL-, microLED-, and THz radio-based solutions, to improve network economics and reliability.</li>
<li>Work closely with vendors (transceiver, cable, SerDes, DSP, silicon photonics foundries) to influence roadmaps and ensure timely delivery of next-gen solutions.</li>
<li>Collaborate with ML training teams to translate workload communication patterns into concrete interconnect topology and optical reconfigurability requirements.</li>
<li>Perform system-level simulation of end-to-end fabric performance.</li>
<li>Drive failure analysis, root cause, and corrective actions for interconnect-related issues in production clusters through fleet-level metrics gathering and analysis.</li>
<li>Contribute to internal tooling and automation for interconnect health monitoring, telemetry, diagnostics, remediation and automated qualification pipelines.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>8+ years of hands-on experience in designing, deploying and operating high-speed copper and optical interconnects, preferably in a module design role or in a hyperscale datacentre environment.</li>
<li>Master&#39;s or PhD degree in Electrical Engineering, Photonics or Physics.</li>
<li>Deep knowledge of PAM4 SerDes performance, equalisation, jitter, crosstalk.</li>
<li>Solid operational understanding of FEC, Retimers, TIAs and Drivers.</li>
<li>Deep knowledge of optical link budget analysis and performance metrics including TDECQ, OMA, Tcode, stressed receiver sensitivity and associated diagnostics.</li>
<li>Expertise in transceiver components including CW lasers, SiPh PICs, EML, DSP, passive subassemblies, their failure modes and characterisation.</li>
<li>Knowledge of thermal, mechanical, power, signal integrity constraints in dense hardware.</li>
<li>Knowledge of SiPh design process, yield improvement and reliability testing.</li>
<li>Familiarity with CPO technologies and challenges/risk areas.</li>
<li>Familiarity with subcomponent supply chains and global manufacturers, ODMs and CMs.</li>
<li>Strong problem-solving skills and ability to thrive in a fast-paced, ambiguous setting.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>high-speed copper and optical interconnects, PAM4 SerDes performance, equalisation, jitter, crosstalk, FEC, Retimers, TIAs, Drivers, optical link budget analysis, performance metrics, TDECQ, OMA, Tcode, stressed receiver sensitivity, associated diagnostics, CW lasers, SiPh PICs, EML, DSP, passive subassemblies, thermal, mechanical, power, signal integrity constraints, SiPh design process, yield improvement, reliability testing, CPO technologies, subcomponent supply chains, global manufacturers, ODMs, CMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The company operates with a flat organisational structure.</Employerdescription>
      <Employerwebsite>https://www.x.ai/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>440000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5043570007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>