{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/compilers"},"x-facet":{"type":"skill","slug":"compilers","display":"Compilers","count":10},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1df66b08-463"},"title":"Technical Program Manager, Inference Performance","description":"<p>As a Technical Program Manager for Inference, you&#39;ll be the critical bridge between our inference systems and the broader organisation. You&#39;ll drive strategic initiatives across inference runtime and accelerator performance,coordinating model launches, managing cross-platform dependencies, and ensuring reliability across multiple hardware targets.</p>\n<p>This role is essential for keeping our most contended infrastructure teams shipping effectively while Research, Product, and Safety all depend on their output.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Systems Integration &amp; Coordination: Lead cross-functional initiatives for new infrastructure integration, establishing clear ownership, timelines, and communication channels between teams. Drive end-to-end planning for major infrastructure transitions including platform modernization and new tech adoption.</li>\n</ul>\n<ul>\n<li>Performance &amp; Efficiency: Partner with engineering teams to identify optimisation opportunities, track performance metrics, and prioritise work that unlocks capacity gains. 
Coordinate across runtime and accelerator layers to ensure efficiency wins ship without compromising reliability.</li>\n</ul>\n<ul>\n<li>Launch Coordination: Drive end-to-end readiness for model and feature launches across multiple hardware platforms. Establish processes for cross-platform validation, manage launch timelines, and ensure smooth handoffs between runtime, accelerator, and downstream teams.</li>\n</ul>\n<ul>\n<li>Strategic Planning: Own and prioritise the inference deployment roadmap, working closely with engineering leadership to prioritise initiatives and manage dependencies. Provide visibility into upcoming changes and their organisational impact.</li>\n</ul>\n<ul>\n<li>Stakeholder Communication: Build strong relationships across research, engineering, and product teams to understand requirements and constraints. Translate technical complexities into clear updates for leadership and ensure alignment on priorities and timelines.</li>\n</ul>\n<ul>\n<li>Process Improvement: Identify inefficiencies in current workflows and drive systematic improvements. 
Establish metrics and dashboards to track infrastructure health, capacity utilisation, and deployment success rates.</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have several years of experience in technical program management, with proven success delivering complex infrastructure programs, preferably in ML/AI systems or large-scale distributed systems</li>\n</ul>\n<ul>\n<li>Have deep technical understanding of inference systems, compilers, or hardware accelerators to engage substantively with engineers and identify technical risks.</li>\n</ul>\n<ul>\n<li>Excel at creating structure and processes in ambiguous environments, bringing clarity to complex cross-team initiatives</li>\n</ul>\n<ul>\n<li>Have strong stakeholder management skills and can build trust with both technical and non-technical partners</li>\n</ul>\n<ul>\n<li>Are comfortable navigating competing priorities and using data to drive technical decisions</li>\n</ul>\n<ul>\n<li>Have experience with infrastructure scaling initiatives, hardware integrations, or deployment governance</li>\n</ul>\n<ul>\n<li>Thrive in fast-paced environments and can balance strategic planning with tactical execution</li>\n</ul>\n<ul>\n<li>Are passionate about AI infrastructure and understand the unique challenges of deploying and scaling large language models</li>\n</ul>\n<p>Deadline to apply: None, applications will be received on a rolling basis.</p>\n<p>The annual compensation range for this role is $290,000-$365,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1df66b08-463","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5107763008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$290,000-$365,000 USD","x-skills-required":["Technical Program Management","Inference Systems","Compilers","Hardware Accelerators","Cross-Functional Initiatives","Infrastructure Integration","Platform Modernization","New Tech Adoption","Performance Metrics","Capacity Gains","Runtime and Accelerator Layers","Efficiency Wins","Reliability","Model and Feature Launches","Cross-Platform Validation","Launch Timelines","Smooth Handoffs","Inference Deployment Roadmap","Engineering Leadership","Prioritisation Initiatives","Dependencies","Upcoming Changes","Organisational Impact","Stakeholder Communication","Requirements and Constraints","Technical Complexities","Leadership Updates","Priorities and Timelines","Process Improvement","Metrics and Dashboards","Infrastructure Health","Capacity Utilisation","Deployment Success Rates"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:27.143Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Program Management, Inference Systems, Compilers, Hardware Accelerators, Cross-Functional Initiatives, Infrastructure Integration, Platform Modernization, New Tech Adoption, Performance Metrics, Capacity Gains, Runtime and Accelerator Layers, Efficiency Wins, Reliability, Model and Feature Launches, Cross-Platform Validation, Launch Timelines, Smooth Handoffs, Inference Deployment Roadmap, Engineering Leadership, Prioritisation 
Initiatives, Dependencies, Upcoming Changes, Organisational Impact, Stakeholder Communication, Requirements and Constraints, Technical Complexities, Leadership Updates, Priorities and Timelines, Process Improvement, Metrics and Dashboards, Infrastructure Health, Capacity Utilisation, Deployment Success Rates","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":290000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5e0f8d18-009"},"title":"Software Engineer","description":"<p>Synopsys software engineers are key enablers in the world of Electronic Design Automation (EDA), developing and maintaining software used in chip design, verification and manufacturing.</p>\n<p>They work on assignments like designing, developing, and troubleshooting software, leveraging the state-of-the-art technologies like AI/ML, GenAI and Cloud. Their critical contributions enable world-wide EDA designers to extend the frontiers of semiconductors and chip development.</p>\n<p>You are an accomplished software engineer with a passion for innovation and a drive to solve challenging problems. With a minimum of 5 years of experience in software development, you possess deep expertise in C/C++ and a thorough understanding of data structures and algorithms. You’re familiar with operating systems, compilers, networks, and internet-related tools, and have hands-on experience in developing complex software projects. Your background includes working with Hierarchical DFT, 1687, Pattern Porting, and CAD tool development, and you are keen to expand your knowledge in these areas. You approach issues with creativity, exercise independent judgment, and thrive in collaborative environments where you can guide and mentor junior peers. You embody a growth mindset, always eager to learn new technologies and refine your skills. 
Your analytical acumen and problem-solving abilities enable you to navigate ambiguity and deliver robust solutions. You value diversity, inclusivity, and teamwork, believing that the best results are achieved through open communication and shared expertise. You are proactive, adaptable, and committed to excellence, ready to make a significant impact within a forward-thinking organisation.</p>\n<p>Designing, developing, troubleshooting, and debugging sophisticated software programs and tools. \nBuilding and enhancing software solutions including operating systems, compilers, routers, networking utilities, databases, and internet-related applications. \nDetermining hardware compatibility and influencing hardware design to optimise software performance. \nCreating robust algorithms and data structures to solve complex engineering problems. \nCollaborating with cross-functional teams to execute projects from conception to completion. \nGuiding and mentoring junior team members, sharing expertise and fostering a collaborative learning environment. \nAnalysing, diagnosing, and resolving technical issues with creative and effective solutions.</p>\n<p>Driving innovation in chip design and verification through advanced software development. \nEnhancing the performance, efficiency, and reliability of Synopsys tools and solutions. \nContributing to the successful delivery of high-performance silicon chips for global customers. \nEmpowering teams with technical leadership, knowledge sharing, and mentorship. \nInfluencing hardware and software integration for optimal system outcomes. \nHelping Synopsys maintain its leadership in the semiconductor and software industry through continuous improvement and creative problem-solving.</p>\n<p>Minimum of 5 years’ experience in software development, preferably in high-tech or semiconductor domains. \nExpertise in C/C++ programming and a strong foundation in data structures and algorithms. 
\nHands-on experience with operating systems, compilers, networks, and internet-related tools. \nPrior knowledge and experience in Hierarchical DFT, 1687, Pattern Porting, and CAD tool development. \nAbility to resolve complex technical issues independently and creatively. \nFamiliarity with hardware-software integration and performance optimisation.</p>\n<p>Analytical thinker with exceptional problem-solving skills. \nEffective communicator and collaborative team player. \nProactive learner with a growth mindset and adaptability to new technologies. \nMentor and guide for junior engineers, fostering inclusion and knowledge sharing. \nIndependent decision-maker who thrives in dynamic, fast-paced environments.</p>\n<p>You’ll join a talented and diverse engineering team at Synopsys, focused on driving the next generation of software tools for chip design and verification. The team collaborates closely across disciplines, blending expertise in software and hardware to deliver industry-leading solutions. You’ll work alongside passionate engineers dedicated to continuous learning, innovation, and excellence.</p>\n<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>\n<p>At Synopsys, we want talented people of every background to feel valued and supported to do their best work. 
Synopsys considers all applicants for employment without regard to race, colour, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.</p>","url":"https://yubhub.co/jobs/job_5e0f8d18-009","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/noida/software-engineer/44408/93486284480","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C/C++","Data Structures and Algorithms","Operating Systems","Compilers","Networks","Internet-related Tools","Hierarchical DFT","1687","Pattern Porting","CAD Tool Development"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:16:17.969Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Noida"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, Data Structures and Algorithms, Operating Systems, Compilers, Networks, Internet-related Tools, Hierarchical DFT, 1687, Pattern Porting, CAD Tool Development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64bc4afc-578"},"title":"R&D Engineering, Sr Engineer - DFT","description":"<p><strong>Overview</strong></p>\n<p>Synopsys software engineers are key enablers in the world of Electronic Design Automation (EDA), developing and maintaining software used in chip design, verification, and manufacturing. They work on assignments like designing, developing, and troubleshooting software, leveraging the state-of-the-art technologies like AI/ML, GenAI, and Cloud. 
Their critical contributions enable world-wide EDA designers to extend the frontiers of semiconductors and chip development.</p>\n<p><strong>Job Description</strong></p>\n<p><strong>Category</strong></p>\n<p>Engineering</p>\n<p><strong>Hire Type</strong></p>\n<p>Employee</p>\n<p><strong>Job ID</strong></p>\n<p>15368</p>\n<p><strong>Remote Eligible</strong></p>\n<p>No</p>\n<p><strong>Date Posted</strong></p>\n<p>02/22/2026</p>\n<p><strong>We Are:</strong></p>\n<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.</p>\n<p><strong>You Are:</strong></p>\n<p>You are an accomplished software engineer with a passion for innovation and a drive to solve challenging problems. With a minimum of 3 years of experience in software development, you possess deep expertise in C/C++ and a thorough understanding of data structures and algorithms. You’re familiar with operating systems, compilers, networks, and internet-related tools, and have hands-on experience in developing complex software projects. Your background includes working with Hierarchical DFT, 1687, Pattern Porting, and CAD tool development, and you are keen to expand your knowledge in these areas. You approach issues with creativity, exercise independent judgment, and thrive in collaborative environments where you can guide and mentor junior peers. You embody a growth mindset, always eager to learn new technologies and refine your skills. Your analytical acumen and problem-solving abilities enable you to navigate ambiguity and deliver robust solutions. 
You value diversity, inclusivity, and teamwork, believing that the best results are achieved through open communication and shared expertise. You are proactive, adaptable, and committed to excellence, ready to make a significant impact within a forward-thinking organization.</p>\n<p><strong>What You’ll Be Doing:</strong></p>\n<ul>\n<li>Designing, developing, troubleshooting, and debugging sophisticated software programs and tools.</li>\n</ul>\n<ul>\n<li>Building and enhancing software solutions including operating systems, compilers, routers, networking utilities, databases, and internet-related applications.</li>\n</ul>\n<ul>\n<li>Determining hardware compatibility and influencing hardware design to optimize software performance.</li>\n</ul>\n<ul>\n<li>Creating robust algorithms and data structures to solve complex engineering problems.</li>\n</ul>\n<ul>\n<li>Collaborating with cross-functional teams to execute projects from conception to completion.</li>\n</ul>\n<ul>\n<li>Guiding and mentoring junior team members, sharing expertise and fostering a collaborative learning environment.</li>\n</ul>\n<ul>\n<li>Analyzing, diagnosing, and resolving technical issues with creative and effective solutions.</li>\n</ul>\n<p><strong>The Impact You Will Have:</strong></p>\n<ul>\n<li>Driving innovation in chip design and verification through advanced software development.</li>\n</ul>\n<ul>\n<li>Enhancing the performance, efficiency, and reliability of Synopsys tools and solutions.</li>\n</ul>\n<ul>\n<li>Contributing to the successful delivery of high-performance silicon chips for global customers.</li>\n</ul>\n<ul>\n<li>Empowering teams with technical leadership, knowledge sharing, and mentorship.</li>\n</ul>\n<ul>\n<li>Influencing hardware and software integration for optimal system outcomes.</li>\n</ul>\n<ul>\n<li>Helping Synopsys maintain its leadership in the semiconductor and software industry through continuous improvement and creative 
problem-solving.</li>\n</ul>\n<p><strong>What You’ll Need:</strong></p>\n<ul>\n<li>Minimum of 5 years’ experience in software development, preferably in high-tech or semiconductor domains.</li>\n</ul>\n<ul>\n<li>Expertise in C/C++ programming and a strong foundation in data structures and algorithms.</li>\n</ul>\n<ul>\n<li>Hands-on experience with operating systems, compilers, networks, and internet-related tools.</li>\n</ul>\n<ul>\n<li>Prior knowledge and experience in Hierarchical DFT, 1687, Pattern Porting, and CAD tool development.</li>\n</ul>\n<ul>\n<li>Ability to resolve complex technical issues independently and creatively.</li>\n</ul>\n<ul>\n<li>Familiarity with hardware-software integration and performance optimization.</li>\n</ul>\n<p><strong>Who You Are:</strong></p>\n<ul>\n<li>Analytical thinker with exceptional problem-solving skills.</li>\n</ul>\n<ul>\n<li>Effective communicator and collaborative team player.</li>\n</ul>\n<ul>\n<li>Proactive learner with a growth mindset and adaptability to new technologies.</li>\n</ul>\n<ul>\n<li>Mentor and guide for junior engineers, fostering inclusion and knowledge sharing.</li>\n</ul>\n<ul>\n<li>Independent decision-maker who thrives in dynamic, fast-paced environments.</li>\n</ul>\n<p><strong>The Team You’ll Be A Part Of:</strong></p>\n<p>You’ll join a talented and diverse engineering team at Synopsys, focused on driving the next generation of software tools for chip design and verification. The team collaborates closely across disciplines, blending expertise in software and hardware to deliver industry-leading solutions. You’ll work alongside passionate engineers dedicated to continuous learning, innovation, and excellence.</p>\n<p><strong>Rewards and Benefits:</strong></p>\n<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. 
Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>\n<p>At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.</p>\n<p><strong>Benefits</strong></p>\n<p>At Synopsys, innovation is driven by our incredible team around the world. We feel honored to work alongside such talented and passionate individuals who choose to make a difference here every day. We&#39;re proud to provide the comprehensive benefits and rewards that our team truly deserves.</p>\n<p>Visit Benefits Page</p>\n<ul>\n<li><strong>Health &amp; Wellness</strong></li>\n</ul>\n<p>Comprehensive medical and healthcare plans that work for you and your family.</p>\n<ul>\n<li><strong>Time Away</strong></li>\n</ul>\n<p>In addition to company holidays, we have ETO and FTO Programs.</p>\n<ul>\n<li><strong>Family Support</strong></li>\n</ul>\n<p>Maternity and paternity leave, parenting resources, adoption and surrogacy assistance, and more.</p>\n<ul>\n<li><strong>ESPP</strong></li>\n</ul>\n<p>Purchase Synopsys common stock at a 15% discount, with a 24 month look-back.</p>\n<ul>\n<li><strong>Retirement Plans</strong></li>\n</ul>\n<p>Save for your future with our retirement plans that vary by region and country.</p>\n<ul>\n<li><strong>Compensation</strong></li>\n</ul>\n<p>Competitive salaries.</p>\n<p>Benefits vary by country and region - check with your recruiter to confirm</p>","url":"https://yubhub.co/jobs/job_64bc4afc-578","directApply":true,
"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/bengaluru/r-and-d-engineering-sr-engineer-dft/44408/92070292128","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C/C++","data structures and algorithms","operating systems","compilers","networks","internet-related tools","Hierarchical DFT","1687","Pattern Porting","CAD tool development"],"x-skills-preferred":[],"datePosted":"2026-03-09T11:07:19.335Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, data structures and algorithms, operating systems, compilers, networks, internet-related tools, Hierarchical DFT, 1687, Pattern Porting, CAD tool development"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a51375e8-30e"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost. Our work spans today&#39;s frontier AI workloads and directly shapes the next generation of accelerators, system architectures, and large-scale AI platforms. We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. 
The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures. This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale. In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>About the Team</p>\n<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>\n<p>Microsoft Superintelligence Team</p>\n<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams giving our models the chance to reach billions of users and create immense positive impact. If you’re a brilliant, highly-ambitious and low ego individual, you’ll fit right in—come and join us as we work on our next generation of models!</p>\n<p>Responsibilities</p>\n<p>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks. Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements. Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems. Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps. Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations. 
Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs. Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams. Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>","url":"https://yubhub.co/jobs/job_a51375e8-30e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["Experience designing or operating large-scale AI clusters for training or inference","Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications","Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Background in performance modeling and capacity planning for future hardware 
generations","Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML systems"],"datePosted":"2026-03-08T22:18:41.443Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, Experience designing or operating large-scale AI clusters for training or inference, Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications, Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Background in performance modeling and capacity planning for future hardware generations, Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd1a0d16-311"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. 
We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost.</p>\n<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>\n<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>\n<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>Microsoft Superintelligence Team\nMicrosoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. 
The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams giving our models the chance to reach billions of users and create immense positive impact.</p>\n<p>Responsibilities\nLead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</p>\n<p>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</p>\n<p>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</p>\n<p>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</p>\n<p>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</p>\n<p>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</p>\n<p>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product 
teams.</p>\n<p>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>\n<p>Qualifications\nBachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Additional or Preferred Qualifications\nMaster’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Strong background in one or more of the following areas:</p>\n<ul>\n<li>AI accelerator or GPU architectures</li>\n<li>Distributed systems and large-scale AI training/inference</li>\n<li>High-performance computing (HPC) and collective communications</li>\n<li>ML systems, runtimes, or compilers</li>\n<li>Performance modeling, benchmarking, and systems analysis</li>\n<li>Hardware–software co-design for AI workloads</li>\n</ul>\n<p>Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development.</p>\n<p>Proven ability to work across organizational boundaries and influence technical decisions involving multiple stakeholders. Experience designing or operating large-scale AI clusters for training or inference. Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications. Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand). Background in performance modeling and capacity planning for future hardware generations. 
Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews. Publications, patents, or open-source contributions in systems, architecture, or ML systems are a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd1a0d16-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["LLMs, multimodal models, or recommendation systems, and their systems-level implications","Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Performance modeling and capacity planning for future hardware generations","Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML 
systems"],"datePosted":"2026-03-08T22:13:30.666Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, LLMs, multimodal models, or recommendation systems, and their systems-level implications, Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Performance modeling and capacity planning for future hardware generations, Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_459d7a0d-23e"},"title":"Technical Program Manager, Inference Performance","description":"<p>As a Technical Program Manager for Inference, you&#39;ll be the critical bridge between our inference systems and the broader organisation. 
You&#39;ll drive strategic initiatives across inference runtime and accelerator performance—coordinating model launches, managing cross-platform dependencies, and ensuring reliability across multiple hardware targets.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li><strong>Systems Integration &amp; Coordination</strong>: Lead cross-functional initiatives for new infrastructure integration, establishing clear ownership, timelines, and communication channels between teams. Drive end-to-end planning for major infrastructure transitions including platform modernization and new tech adoption.</li>\n<li><strong>Performance &amp; Efficiency:</strong> Partner with engineering teams to identify optimisation opportunities, track performance metrics, and prioritise work that unlocks capacity gains. Coordinate across runtime and accelerator layers to ensure efficiency wins ship without compromising reliability.</li>\n<li><strong>Launch Coordination:</strong> Drive end-to-end readiness for model and feature launches across multiple hardware platforms. Establish processes for cross-platform validation, manage launch timelines, and ensure smooth handoffs between runtime, accelerator, and downstream teams.</li>\n<li><strong>Strategic Planning:</strong> Own the inference deployment roadmap, working closely with engineering leadership to prioritise initiatives and manage dependencies. Provide visibility into upcoming changes and their organisational impact.</li>\n<li><strong>Stakeholder Communication:</strong> Build strong relationships across research, engineering, and product teams to understand requirements and constraints. Translate technical complexities into clear updates for leadership and ensure alignment on priorities and timelines.</li>\n<li><strong>Process Improvement:</strong> Identify inefficiencies in current workflows and drive systematic improvements. 
Establish metrics and dashboards to track infrastructure health, capacity utilisation, and deployment success rates.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have several years of experience in technical program management, with proven success delivering complex infrastructure programs, preferably in ML/AI systems or large-scale distributed systems</li>\n<li>Have the deep technical understanding of inference systems, compilers, or hardware accelerators needed to engage substantively with engineers and identify technical risks</li>\n<li>Excel at creating structure and processes in ambiguous environments, bringing clarity to complex cross-team initiatives</li>\n<li>Have strong stakeholder management skills and can build trust with both technical and non-technical partners</li>\n<li>Are comfortable navigating competing priorities and using data to drive technical decisions</li>\n<li>Have experience with infrastructure scaling initiatives, hardware integrations, or deployment governance</li>\n<li>Thrive in fast-paced environments and can balance strategic planning with tactical execution</li>\n<li>Are passionate about AI infrastructure and understand the unique challenges of deploying and scaling large language models</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the</p>","url":"https://yubhub.co/jobs/job_459d7a0d-23e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5107763008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$290,000 - $365,000USD","x-skills-required":["technical program management","inference systems","compilers","hardware accelerators","cross-functional initiatives","model launches","cross-platform dependencies","reliability","performance metrics","capacity gains","efficiency wins","runtime","accelerator layers","launch timelines","smooth handoffs","strategic planning","inference deployment roadmap","engineering leadership","prioritisation","dependencies","visibility","upcoming changes","organisational impact","stakeholder communication","requirements","constraints","technical complexities","clear updates","leadership","alignment","priorities","timelines","process improvement","inefficiencies","workflows","systematic improvements","metrics","dashboards","infrastructure health","capacity utilisation","deployment success rates"],"x-skills-preferred":["infrastructure scaling initiatives","hardware integrations","deployment governance","fast-paced environments","strategic planning","tactical execution","AI infrastructure","large language models"],"datePosted":"2026-03-08T13:48:59.030Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, 
WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"technical program management, inference systems, compilers, hardware accelerators, cross-functional initiatives, model launches, cross-platform dependencies, reliability, performance metrics, capacity gains, efficiency wins, runtime, accelerator layers, launch timelines, smooth handoffs, strategic planning, inference deployment roadmap, engineering leadership, prioritisation, dependencies, visibility, upcoming changes, organisational impact, stakeholder communication, requirements, constraints, technical complexities, clear updates, leadership, alignment, priorities, timelines, process improvement, inefficiencies, workflows, systematic improvements, metrics, dashboards, infrastructure health, capacity utilisation, deployment success rates, infrastructure scaling initiatives, hardware integrations, deployment governance, fast-paced environments, strategic planning, tactical execution, AI infrastructure, large language models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":290000,"maxValue":365000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11a60d5a-f54"},"title":"Performance Engineer, GPU","description":"<p><strong>About the role:</strong></p>\n<p>Pioneering the next generation of AI requires breakthrough innovations in GPU performance and systems engineering. As a GPU Performance Engineer, you&#39;ll architect and implement the foundational systems that power Claude and push the frontiers of what&#39;s possible with large language models. 
You&#39;ll be responsible for maximizing GPU utilization and performance at unprecedented scale, developing cutting-edge optimizations that directly enable new model capabilities and dramatically improve inference efficiency.</p>\n<p>Working at the intersection of hardware and software, you&#39;ll implement state-of-the-art techniques from custom kernel development to distributed system architectures. Your work will span the entire stack—from low-level tensor core optimizations to orchestrating thousands of GPUs in perfect synchronization.</p>\n<p>Strong candidates will have a track record of delivering transformative GPU performance improvements in production ML systems and will be excited to shape the future of AI infrastructure alongside world-class researchers and engineers.</p>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have deep experience with GPU programming and optimization at scale</li>\n<li>Are impact-driven, passionate about delivering measurable performance breakthroughs</li>\n<li>Can navigate complex systems from hardware interfaces to high-level ML frameworks</li>\n<li>Enjoy collaborative problem-solving and pair programming</li>\n<li>Want to work on state-of-the-art language models with real-world impact</li>\n<li>Care about the societal impacts of your work</li>\n<li>Thrive in ambiguous environments where you define the path forward</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU Kernel Development: CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization</li>\n<li>ML Compilers &amp; Frameworks: PyTorch/JAX internals, torch.compile, XLA, custom operators</li>\n<li>Performance Engineering: Kernel fusion, memory bandwidth optimization, profiling with Nsight</li>\n<li>Distributed Systems: NCCL, NVLink, collective communication, model parallelism</li>\n<li>Low-Precision: INT8/FP8 quantization, mixed-precision techniques</li>\n<li>Production Systems: Large-scale 
training infrastructure, fault tolerance, cluster orchestration</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Co-design attention mechanisms and algorithms for next-generation hardware architectures</li>\n<li>Develop custom kernels for emerging quantization formats and mixed-precision techniques</li>\n<li>Design distributed communication strategies for multi-node GPU clusters</li>\n<li>Optimize end-to-end training and inference pipelines for frontier language models</li>\n<li>Build performance modeling frameworks to predict and optimize GPU utilization</li>\n<li>Implement kernel fusion strategies to minimize memory bandwidth bottlenecks</li>\n<li>Create resilient systems for planet-scale distributed training infrastructure</li>\n<li>Profile and eliminate performance bottlenecks in production serving infrastructure</li>\n<li>Partner with hardware vendors to influence future accelerator capabilities and software stacks</li>\n</ul>\n<p><strong>Deadline to apply:</strong> None. 
Applications will be reviewed on a rolling basis.</p>\n<p>The expected salary range for this position is:</p>\n<p>Annual Salary: $280,000 - $850,000 USD</p>","url":"https://yubhub.co/jobs/job_11a60d5a-f54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4926227008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000 - $850,000USD","x-skills-required":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"x-skills-preferred":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective 
communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"datePosted":"2026-03-08T13:45:05.412Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU programming, optimization at scale, custom kernel development, distributed system architectures, low-level tensor core optimizations, orchestrating thousands of GPUs, GPU kernel development, CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization, ML compilers & frameworks, PyTorch/JAX internals, torch.compile, XLA, custom operators, performance engineering, kernel fusion, memory bandwidth optimization, profiling with Nsight, distributed systems, NCCL, NVLink, collective communication, model parallelism, low-precision, INT8/FP8 quantization, mixed-precision techniques, production systems, large-scale training infrastructure, fault tolerance, cluster orchestration, GPU programming, optimization at scale, custom kernel development, distributed system architectures, low-level tensor core optimizations, orchestrating thousands of GPUs, GPU kernel development, CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization, ML compilers & frameworks, PyTorch/JAX internals, torch.compile, XLA, custom operators, performance engineering, kernel fusion, memory bandwidth optimization, profiling with Nsight, distributed systems, NCCL, NVLink, collective communication, model parallelism, low-precision, INT8/FP8 quantization, mixed-precision techniques, production systems, large-scale training infrastructure, fault tolerance, cluster 
orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7badeaf5-492"},"title":"Hardware / Software CoDesign Engineer","description":"<p><strong>Hardware / Software CoDesign Engineer</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$342K – $555K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 
hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Engineer on our hardware optimization and co-design team, you will co-design future hardware from different vendors for programmability and performance. You will work with our kernel, compiler and machine learning engineers to understand their unique needs related to ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations. You will evangelize these constraints with various vendors to develop and influence future hardware architectures towards efficient training and inference on our models. 
If you are excited about efficiently distributing a large language model across devices, dealing with and optimizing system-wide/rack-wide networking bottlenecks, and eventually tailoring the compute pipe and memory hierarchy of the hardware platform, simulating workloads at different abstractions and working closely with our partners, this is the perfect opportunity!</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Co-design future hardware for programmability and performance with our hardware vendors</li>\n</ul>\n<ul>\n<li>Assist hardware vendors in developing optimal kernels and add support for them in our compiler</li>\n</ul>\n<ul>\n<li>Develop performance estimates for critical kernels for different hardware configurations and drive decisions on compute core and memory hierarchy features</li>\n</ul>\n<ul>\n<li>Build system performance models at different abstraction levels and carry out analysis to drive decisions on scale-up, scale-out, and front-end networking</li>\n</ul>\n<ul>\n<li>Work with machine learning engineers, kernel engineers, and compiler developers to understand their vision and needs from high-performance accelerators</li>\n</ul>\n<ul>\n<li>Manage communication and coordination with internal and external partners</li>\n</ul>\n<ul>\n<li>Influence the roadmaps of hardware partners to optimize their hardware for OpenAI’s workloads.</li>\n</ul>\n<ul>\n<li>Evaluate potential partners’ accelerators and platforms.</li>\n</ul>\n<ul>\n<li>As the scope of the role and team grows, understand and influence roadmaps for hardware partners for our datacenter networks, racks, and buildings.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>4+ years of industry experience, including experience harnessing compute at scale and optimizing ML platform code to run efficiently on target hardware.</li>\n</ul>\n<ul>\n<li>Strong experience in software/hardware co-design</li>\n</ul>\n<ul>\n<li>Deep understanding of GPU and/or other AI 
accelerators</li>\n</ul>\n<ul>\n<li>Experience with CUDA, Triton or a related accelerator programming language</li>\n</ul>\n<ul>\n<li>Experience driving Machine Learning accuracy with low precision formats</li>\n</ul>\n<ul>\n<li>Experience with system performance modeling and analysis to optimize ML model deployment</li>\n</ul>\n<ul>\n<li>Strong coding skills in C/C++ and Python</li>\n</ul>\n<ul>\n<li>Familiarity with the fundamentals of deep learning computing and chip architecture/microarchitecture</li>\n</ul>\n<p><strong>These attributes are nice to have:</strong></p>\n<ul>\n<li>PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, Compilers, or other Systems</li>\n</ul>\n<ul>\n<li>Strong understanding of LLMs and challenges related to their training and inference</li>\n</ul>","url":"https://yubhub.co/jobs/job_7badeaf5-492","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/bdbb2292-ecb3-42dc-ba89-65edf397d8f8","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$342K – $555K • Offers Equity","x-skills-required":["software/hardware co-design","GPU and/or other AI accelerators","CUDA, Triton or a related accelerator programming language","Machine Learning accuracy with low precision formats","system performance modeling and analysis to optimize ML model deployment","C/C++ and Python"],"x-skills-preferred":["PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, 
Compilers, or other Systems","Strong understanding of LLMs and challenges related to their training and inference"],"datePosted":"2026-03-06T18:39:51.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software/hardware co-design, GPU and/or other AI accelerators, CUDA, Triton or a related accelerator programming language, Machine Learning accuracy with low precision formats, system performance modeling and analysis to optimize ML model deployment, C/C++ and Python, PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, Compilers, or other Systems, Strong understanding of LLMs and challenges related to their training and inference","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":342000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_651a81aa-1bc"},"title":"Software Engineer Systems Research Internship","description":"<p><strong>Software Engineer Systems Research Internship, Applied Emerging Talent (Summer 2026)</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Intern</p>\n<p><strong>Location Type</strong></p>\n<p>On-site</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Deadline to Apply</strong></p>\n<p>March 11, 2026 at 3:00 AM EDT</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$67 per hour</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to the world. We seek to learn from deployment and broadly distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. 
We aim to make our innovative tools globally accessible, transcending geographic, economic, or platform barriers. Our commitment is to facilitate the use of AI to enhance lives, fostered by rigorous insights into how people use our products.</p>\n<p><strong>About the Role</strong></p>\n<p>A systems research internship is for people who love the real-world intersection of systems-engineering and research: you’ll investigate a hard systems problem, build something meaningful, and measure it carefully. The goal is practical impact—making Applied Systems better: more efficient, more scalable, and more reliable.</p>\n<p>OpenAI is currently recruiting for candidates interested in a 13-week, paid, in-person internship based in our San Francisco office during Summer 2026. In some cases, it may be extended for an additional 13 weeks (for a total of up to 26 weeks), based on team needs, candidate interest, and performance.</p>\n<p><strong>In this role, you will typically focus on improving real systems in areas like:</strong></p>\n<ul>\n<li>Distributed systems &amp; storage (throughput, latency, consistency, durability)</li>\n</ul>\n<ul>\n<li>Compute &amp; scheduling (GPU/accelerator utilization, job orchestration, queuing)</li>\n</ul>\n<ul>\n<li>Performance engineering (profiling, bottlenecks, scalability, capacity planning)</li>\n</ul>\n<ul>\n<li>Reliability &amp; observability (fault tolerance, monitoring, incident learning)</li>\n</ul>\n<ul>\n<li>Networking &amp; data pipelines (data movement, caching, streaming efficiency)</li>\n</ul>\n<ul>\n<li>Systems for ML (training/inference performance, evaluation infrastructure, tooling)</li>\n</ul>\n<p>Most projects involve some of these steps:</p>\n<ul>\n<li>Defining a clear hypothesis (“we think X will reduce latency by Y under Z”)</li>\n</ul>\n<ul>\n<li>Instrumenting existing production systems, gathering metrics and detailed analysis to validate the hypothesis</li>\n</ul>\n<ul>\n<li>Building or modifying a real system 
(prototype or production-quality improvements when appropriate)</li>\n</ul>\n<ul>\n<li>Running experiments/benchmarks and analyzing results</li>\n</ul>\n<ul>\n<li>Communicating tradeoffs and recommendations clearly</li>\n</ul>\n<ul>\n<li>Publishing research results in technical journals and at conferences</li>\n</ul>\n<p><strong>Your background looks something like:</strong></p>\n<ul>\n<li>Currently pursuing a PhD in Computer Science, Computer Engineering, or a relevant technical field</li>\n</ul>\n<ul>\n<li>Proficiency coding in C++, Java, Python, Rust, etc.</li>\n</ul>\n<ul>\n<li>Conducting ongoing research on systems topics such as DL/ML, information retrieval, systems security and cryptography, databases, networking, distributed systems, or compilers</li>\n</ul>\n<ul>\n<li>Ability to move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_651a81aa-1bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/13a9e4e4-505b-4545-8b2b-b0bcc09c2b4f","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":"$67 per hour","x-skills-required":["c++","Java","python","rust","distributed systems","storage","compute","scheduling","performance engineering","reliability","observability","networking","data pipelines","systems for ML"],"x-skills-preferred":["DL/ML","information retrieval","systems security and cryptography","databases","networking","distributed systems","compilers"],"datePosted":"2026-03-06T18:22:48.817Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"c++, Java, python, rust, distributed systems, storage, compute, scheduling, performance engineering, reliability, observability, networking, data pipelines, systems for ML, DL/ML, information retrieval, systems security and cryptography, databases, networking, distributed systems, compilers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2370b50f-2e3"},"title":"Systems Software Engineer","description":"<p>We&#39;re looking for a Systems Software Engineer to join our team. 
As a Systems Software Engineer, you will work with other engineers across the game team to integrate, develop, and debug core technologies and features in a large codebase, merging modern and legacy designs across multiple hardware architectures.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Work with other engineers across the game team to integrate, develop, and debug core technologies and features in a large codebase, merging modern and legacy designs across multiple hardware architectures.</li>\n<li>Manage and optimize memory, load times, and performance.</li>\n<li>Debug a range of defects in development environments.</li>\n<li>Look for ways to increase team efficiency through automation, tooling, or workflow enhancements.</li>\n<li>Contribute to core EA technologies to promote collaborative development efforts.</li>\n<li>Work with technical and non-technical co-workers to create practical technical designs that meet players&#39; expectations.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>3+ years of C++ development experience.</li>\n<li>Ability to learn, test, debug, and extend other software engineers&#39; code.</li>\n<li>Knowledge of software engineering and architectural design.</li>\n<li>Understanding of memory management, file systems, multi-core processing, and performance.</li>\n<li>Experience with profiling tools to monitor and diagnose issues.</li>\n<li>Experience with build systems, pipelines, and source control.</li>\n<li>Experience with codebases supporting multiple compilers and architectures.</li>\n<li>Experience communicating and collaborating with external team members or teams.</li>\n<li>Experience integrating and maintaining large-scale systems and legacy codebases, covering multiple disciplines.</li>\n<li>Experience with multiple programming languages (Python, Lua, C#).</li>\n</ul>\n<p><strong>Why this matters</strong></p>\n<p>As a Systems Software Engineer, you will play a critical role in shaping the future of 
interactive entertainment and creating the next great EA SPORTS game.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2370b50f-2e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Systems-Software-Engineer/210097","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,000 - $139,500 CAD","x-skills-required":["C++ development experience","software engineering and architectural design","memory management","file systems","multi-core processing","performance","profiling tools","build systems","pipelines","source control","codebases supporting multiple compilers and architectures","multiple programming languages"],"x-skills-preferred":["experience with automation","tooling","workflow enhancements","core EA technologies","collaborative development efforts","technical and non-technical co-workers","practical technical designs","players' expectations"],"datePosted":"2026-01-01T16:49:44.623Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++ development experience, software engineering and architectural design, memory management, file systems, multi-core processing, performance, profiling tools, build systems, pipelines, source control, codebases supporting multiple compilers and architectures, multiple programming languages, experience with automation, tooling, workflow enhancements, core EA technologies, collaborative development efforts, technical and non-technical co-workers, practical technical designs, players' 
expectations","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":139500,"unitText":"YEAR"}}}]}