{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/gpu-architecture"},"x-facet":{"type":"skill","slug":"gpu-architecture","display":"GPU Architecture","count":11},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f0f66ce3-d78"},"title":"Senior GenAI Research Engineer - Optimization and Kernels","description":"<p>As a research engineer on the Scaling team at Databricks, you will be responsible for keeping up with the latest developments in deep learning and advancing the scientific frontier by creating new techniques that go beyond the state of the art.</p>\n<p>You will work together on a collaborative team of researchers and engineers with diverse backgrounds and technical training. 
Your goal will be to make our customers successful in applying state-of-the-art LLMs and AI systems, and we encode our scientific expertise into our products to make that possible.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Driving performance improvements through advanced optimization techniques including kernel fusion, mixed precision, memory layout optimization, tiling strategies, and tensorization for training-specific patterns</li>\n<li>Designing, implementing, and optimizing high-performance GPU kernels for training workloads (e.g., attention mechanisms, custom layers, gradient computation, activation functions) targeting NVIDIA architectures</li>\n<li>Designing and implementing distributed training frameworks for large language models, including parallelism strategies (data, tensor, pipeline, ZeRO-based) and optimized communication patterns for gradient synchronization and collective operations</li>\n<li>Profiling, debugging, and optimizing end-to-end training workflows to identify and resolve performance bottlenecks, applying memory optimization techniques like activation checkpointing, gradient sharding, and mixed precision training</li>\n</ul>\n<p>We look for candidates with a strong background in computer science or a related field, hands-on experience writing and tuning CUDA kernels for ML training applications, and a deep understanding of parallelism techniques and memory optimization strategies for large-scale model training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f0f66ce3-d78","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8297797002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["CUDA","NVIDIA GPU architecture","PyTorch","distributed training frameworks","parallelism techniques","memory optimization strategies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:26.571Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CUDA, NVIDIA GPU architecture, PyTorch, distributed training frameworks, parallelism techniques, memory optimization strategies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f1a00bea-138"},"title":"R&D Engineering, Staff Engineer (EDA, GPU Acceleration)","description":"<p>We Are:</p>\n<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>\n<p>You Are:</p>\n<p>You are an accomplished engineering leader with over 3-6 years of experience in developing large-scale applications, particularly within the EDA domain. 
Your expertise spans the entire lifecycle of solution development, from initial specification to hands-on implementation, customer engagement, and iterative refinement. You thrive in environments that demand both deep technical prowess and strong leadership, and you are passionate about mentoring the next generation of engineers. Your background in C/C++ development is robust, and you approach new languages and technologies with curiosity and adaptability, making you an ideal fit for our dynamic team. Experience with CUDA, GPU acceleration, and GPU architecture knowledge is a plus.</p>\n<p>What You’ll Be Doing:</p>\n<p>Enabling GPU acceleration for the entire Fusion Compiler R2G flow. This includes GPU acceleration of each engine in the R2G flow, with new problem formulations that take advantage of GPU architectures. These engines include placement, global routing, detail routing, CTS, optimization, timer, extraction, legalizer and synthesis.</p>\n<p>Owning projects end-to-end, from requirements gathering and design specification to development, testing, and customer interaction, ensuring high-quality deliverables.</p>\n<p>Collaborating closely with cross-functional teams, including product management and product engineering.</p>\n<p>The Impact You Will Have:</p>\n<p>Delivering a GPU-accelerated Fusion Compiler, which will be game-changing for chip design and implementation steps by reducing flow cycle times from weeks to days (or hours).</p>\n<p>Empowering Synopsys customers to achieve faster turnaround times, accelerating their design cycles, and reducing time to market.</p>\n<p>Elevating the technical excellence of the team by sharing best practices, fostering a culture of learning, and mentoring future leaders.</p>\n<p>Shaping the roadmap for Digital Implementation solutions, ensuring that Synopsys remains at the forefront of EDA technology.</p>\n<p>What You’ll Need:</p>\n<p>Minimum 3-6 years of hands-on experience in developing software projects, preferably 
in EDA or semiconductor domains.</p>\n<p>Expert proficiency in C/C++ development, with a proven track record of delivering robust, scalable solutions.</p>\n<p>Experience with physical design, placement, and routing flows in EDA tools.</p>\n<p>Experience with CUDA, GPU acceleration, and GPU architecture knowledge is a plus.</p>\n<p>Strong knowledge of software architecture, Design Thinking, and use of design patterns.</p>\n<p>Excellent communication skills for technical interactions.</p>\n<p>Who You Are:</p>\n<p>Innovative thinker who embraces new technologies and methodologies.</p>\n<p>Strong problem solver with a strategic mindset and attention to detail.</p>\n<p>Effective communicator, able to translate complex technical concepts for diverse audiences.</p>\n<p>Collaborative team player, eager to contribute and learn from others.</p>\n<p>Adaptable and resilient in the face of evolving challenges and requirements.</p>\n<p>The Team You’ll Be A Part Of:</p>\n<p>You’ll join the Fusion Compiler GPU Acceleration team at Synopsys in Sunnyvale, CA (or Hillsboro, OR), a group of passionate engineers focused on developing an industry-first, game-changing GPU-accelerated Digital Implementation solution. This development is part of the Nvidia/Synopsys GPU Acceleration collaboration. This team is driving innovation in EDA and empowering customers worldwide by accelerating their design cycles and reducing time to market.</p>\n<p>Rewards and Benefits:</p>\n<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>\n<p>At Synopsys, we want talented people of every background to feel valued and supported to do their best work. 
Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.</p>\n<p>In addition to the base salary, this role may be eligible for an annual bonus, equity, and other discretionary bonuses. Synopsys offers comprehensive health, wellness, and financial benefits as part of a competitive total rewards package. The actual compensation offered will be based on a number of job-related factors, including location, skills, experience, and education. Your recruiter can share more specific details on the total rewards package upon request. The base salary range for this role across the U.S. is $138,000 to $207,000 per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f1a00bea-138","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/sunnyvale/r-and-d-engineering-staff-engineer-eda-gpu-acceleration/44408/93189758192","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$138000-$207000","x-skills-required":["C/C++ development","CUDA","GPU acceleration","GPU architecture knowledge","Physical design","Placement","Routing flows","Software architecture","Design Thinking","Use of design patterns"],"x-skills-preferred":[],"datePosted":"2026-04-05T13:20:37.265Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++ development, CUDA, GPU acceleration, GPU architecture knowledge, Physical design, Placement, Routing flows, Software architecture, Design Thinking, Use of design 
patterns","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":138000,"maxValue":207000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af442d9f-834"},"title":"Senior AI Developer Technology Engineer, Financial Sector","description":"<p>We&#39;re seeking a Senior AI Developer Technology Engineer to help shape the future of financial AI and data analytics by designing and optimizing parallel algorithms on cutting-edge computing platforms. You will research and develop techniques to GPU-accelerate high-performance workloads at the intersection of AI and financial markets. You will work directly with other technical experts in their fields to perform in-depth analysis and optimization of complex AI and HPC workloads to ensure the best possible performance on modern CPU and GPU architectures. You will publish and present discovered optimization techniques in developer blogs or relevant conferences to engage and educate the Developer community. 
You will influence the design of next-generation hardware architectures, software, and programming models in collaboration with research, hardware, system software, libraries, and tools teams at NVIDIA.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Research and develop techniques to GPU-accelerate high-performance workloads at the intersection of AI and financial markets.</li>\n<li>Work directly with other technical experts in their fields to perform in-depth analysis and optimization of complex AI and HPC workloads to ensure the best possible performance on modern CPU and GPU architectures.</li>\n<li>Publish and present discovered optimization techniques in developer blogs or relevant conferences to engage and educate the Developer community.</li>\n<li>Influence the design of next-generation hardware architectures, software, and programming models in collaboration with research, hardware, system software, libraries, and tools teams at NVIDIA.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>An advanced degree in Computer Science, Computer Engineering, or related computationally focused science degree (or equivalent experience).</li>\n<li>5+ years of relevant work or research experience.</li>\n<li>Direct experience improving the performance of large computational applications used by financial institutions.</li>\n<li>Excellent understanding of linear algebra.</li>\n<li>Programming fluency in C/C++ with a deep understanding of algorithms and software design.</li>\n<li>Hands-on experience with low-level parallel programming, e.g., CUDA, OpenACC, OpenMP, MPI, pthreads, TBB, etc.</li>\n<li>In-depth expertise with CPU/GPU architecture fundamentals.</li>\n<li>Good communication and organization skills, with a logical approach to problem solving, and prioritization skills.</li>\n</ul>\n<p><strong>Ways to stand out from the crowd:</strong></p>\n<ul>\n<li>A Master’s or PhD in a relevant field is highly valued.</li>\n<li>Prior work experience in 
capital markets with exposure to systematic/algorithmic strategies and quantitative trading.</li>\n<li>Experience with parallelizing and optimizing machine learning algorithms like decision trees, time series, and Monte Carlo simulations.</li>\n<li>Deep knowledge of financial data models, pricing/risk simulation algorithms, portfolio optimization, or other finance-specific applications/services.</li>\n<li>Experience developing ML/DL techniques in the finance space, such as stock market prediction, fraud detection, and portfolio optimization/selection.</li>\n</ul>\n<p>You will also be eligible for equity and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af442d9f-834","directApply":true,"hiringOrganization":{"@type":"Organization","name":"NVIDIA","sameAs":"https://nvidia.wd5.myworkdayjobs.com","logo":"https://logos.yubhub.co/nvidia.com.png"},"x-apply-url":"https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/US-CA-Santa-Clara/Senior-AI-Developer-Technology-Engineer--Financial-Sector_JR2013482","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C/C++","CUDA","OpenACC","OpenMP","MPI","pthreads","TBB","CPU/GPU architecture fundamentals","Linear algebra","Parallel programming"],"x-skills-preferred":["Machine learning","Deep learning","Financial data models","Pricing/risk simulation algorithms","Portfolio optimization"],"datePosted":"2026-03-09T20:46:22.712Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Santa Clara, Remote, New York"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, CUDA, OpenACC, OpenMP, MPI, pthreads, TBB, CPU/GPU architecture fundamentals, Linear algebra, Parallel programming, Machine learning, Deep 
learning, Financial data models, Pricing/risk simulation algorithms, Portfolio optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a51375e8-30e"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost. Our work spans today&#39;s frontier AI workloads and directly shapes the next generation of accelerators, system architectures, and large-scale AI platforms. We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures. This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale. In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. 
Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>About the Team</p>\n<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>\n<p>Microsoft Superintelligence Team</p>\n<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams giving our models the chance to reach billions of users and create immense positive impact. 
If you’re a brilliant, highly-ambitious and low ego individual, you’ll fit right in—come and join us as we work on our next generation of models!</p>\n<p>Responsibilities</p>\n<p>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks. Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements. Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems. Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps. Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations. Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs. Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams. 
Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a51375e8-30e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["Experience designing or operating large-scale AI clusters for training or inference","Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications","Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Background in performance modeling and capacity planning for future hardware generations","Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML 
systems"],"datePosted":"2026-03-08T22:18:41.443Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, Experience designing or operating large-scale AI clusters for training or inference, Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications, Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Background in performance modeling and capacity planning for future hardware generations, Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd1a0d16-311"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost.</p>\n<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. 
The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>\n<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>\n<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>Microsoft Superintelligence Team\nMicrosoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. 
We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams giving our models the chance to reach billions of users and create immense positive impact.</p>\n<p>Responsibilities\nLead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</p>\n<p>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</p>\n<p>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</p>\n<p>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</p>\n<p>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</p>\n<p>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</p>\n<p>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams.</p>\n<p>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>\n<p>Qualifications\nBachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering 
experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Additional or Preferred Qualifications\nMaster’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Strong background in one or more of the following areas: AI accelerator or GPU architectures; distributed systems and large-scale AI training/inference; high-performance computing (HPC) and collective communications; ML systems, runtimes, or compilers; performance modeling, benchmarking, and systems analysis; hardware–software co-design for AI workloads. Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development.</p>\n<p>Proven ability to work across organizational boundaries and influence technical decisions involving multiple stakeholders. Experience designing or operating large-scale AI clusters for training or inference. Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications. Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand). Background in performance modeling and capacity planning for future hardware generations. Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews. 
Publications, patents, or open-source contributions in systems, architecture, or ML systems are a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd1a0d16-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["LLMs, multimodal models, or recommendation systems, and their systems-level implications","Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Performance modeling and capacity planning for future hardware generations","Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML systems"],"datePosted":"2026-03-08T22:13:30.666Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, AI accelerator or 
GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, LLMs, multimodal models, or recommendation systems, and their systems-level implications, Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Performance modeling and capacity planning for future hardware generations, Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a34364f-8c5"},"title":"Member of Technical Staff, Hardware Health","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI is looking for a talented Member of Technical Staff, Hardware Health, to ensure these systems deliver sustained reliability, performance, and availability across exascale-class deployments.</p>\n<p><strong>About the Role</strong></p>\n<p>We work closely with research, hardware, datacenter, and platform engineering teams to develop predictive health models, failure detection frameworks, and autonomous remediation systems that keep our AI clusters operating at frontier scale. 
Our team is responsible for Copilot, Bing, Edge, and generative AI research.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design and develop next-generation hardware health monitoring and diagnostic frameworks for large GPU clusters (NVL16/NVL72/GB200+ scale).</li>\n<li>Build predictive analytics pipelines leveraging telemetry, power, and thermal data to anticipate hardware degradation and systemic issues.</li>\n<li>Collaborate with silicon, firmware, and datacenter engineers to identify root causes and remediate large-scale hardware anomalies.</li>\n<li>Define system health KPIs (e.g., NIS/RIS, MTBF, failure domain analysis) and integrate them into real-time observability platforms.</li>\n<li>Lead incident triage for high-impact GPU, network, and cooling issues across distributed clusters.</li>\n<li>Drive automation in health management to reduce manual intervention to the top 5% of anomalies.</li>\n<li>Partner with cross-functional teams to influence hardware design for reliability, thermal efficiency, and serviceability.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience working with large-scale HPC or GPU systems (NVIDIA H100/GB200 or equivalent).</li>\n<li>Deep understanding of GPU architecture, high-speed interconnects (NVLink, InfiniBand, RoCE), and large datacenter topologies.</li>\n<li>Proficiency in hardware telemetry, diagnostics, or failure analysis tools.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong analytical and problem-solving skills.</li>\n<li>Excellent communication and collaboration 
skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8a34364f-8c5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-hardware-health-mai-superintelligence-team-5/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","GPU architecture","high-speed interconnects","hardware telemetry","diagnostics","failure analysis tools"],"x-skills-preferred":["experience working with large-scale HPC or GPU systems","deep understanding of GPU architecture","proficiency in hardware telemetry","diagnostics","failure analysis tools"],"datePosted":"2026-03-06T07:33:03.791Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, GPU architecture, high-speed interconnects, hardware telemetry, diagnostics, failure analysis tools, experience working with large-scale HPC or GPU systems, deep understanding of GPU architecture, proficiency in hardware telemetry, diagnostics, failure analysis 
tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4ed1b6aa-4e5"},"title":"Member of Technical Staff, Hardware Health","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI is looking for a talented Member of Technical Staff, Hardware Health, to ensure these systems deliver sustained reliability, performance, and availability across exascale-class deployments.</p>\n<p><strong>About the Role</strong></p>\n<p>We work closely with research, hardware, datacenter, and platform engineering teams to develop predictive health models, failure detection frameworks, and autonomous remediation systems that keep our AI clusters operating at frontier scale. Our team is responsible for Copilot, Bing, Edge, and generative AI research.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design and develop next-generation hardware health monitoring and diagnostic frameworks for large GPU clusters (NVL16/NVL72/GB200+ scale).</li>\n<li>Build predictive analytics pipelines leveraging telemetry, power, and thermal data to anticipate hardware degradation and systemic issues.</li>\n<li>Collaborate with silicon, firmware, and datacenter engineers to identify root causes and remediate large-scale hardware anomalies.</li>\n<li>Define system health KPIs (e.g., NIS/RIS, MTBF, failure domain analysis) and integrate them into real-time observability platforms.</li>\n<li>Lead incident triage for high-impact GPU, network, and cooling issues across distributed clusters.</li>\n<li>Drive automation in health management to reduce manual intervention to the top 5% of anomalies.</li>\n<li>Partner with cross-functional teams to influence hardware design for reliability, thermal efficiency, and serviceability.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking 
for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience working with large-scale HPC or GPU systems (NVIDIA H100/GB200 or equivalent).</li>\n<li>Deep understanding of GPU architecture, high-speed interconnects (NVLink, InfiniBand, RoCE), and large datacenter topologies.</li>\n<li>Proficiency in hardware telemetry, diagnostics, or failure analysis tools.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong analytical and problem-solving skills.</li>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4ed1b6aa-4e5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-hardware-health-mai-superintelligence-team-4/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","GPU architecture","high-speed interconnects","hardware telemetry","diagnostics","failure analysis tools"],"x-skills-preferred":["machine learning","predictive analytics","autonomous remediation 
systems"],"datePosted":"2026-03-06T07:32:40.802Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, GPU architecture, high-speed interconnects, hardware telemetry, diagnostics, failure analysis tools, machine learning, predictive analytics, autonomous remediation systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_356892b1-542"},"title":"Senior Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI is looking for a talented Senior Software Engineer at its Suzhou office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking an expert Senior GPU Engineer to join our AI Infrastructure team. In this role, you will architect and optimize the core inference engine that powers our large-scale AI models. You will be responsible for pushing the boundaries of hardware performance, reducing latency, and maximizing throughput for Generative AI and Deep Learning workloads. 
You will work at the intersection of Deep Learning algorithms and low-level hardware, designing custom operators and building a highly efficient training/inference execution engine from the ground up.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Custom Operator Development: Design and implement highly optimized GPU kernels (CUDA/Triton) for critical deep learning operations (e.g., FlashAttention, GEMM, LayerNorm) to outperform standard libraries.</li>\n<li>Inference Engine Architecture: Contribute to the development of our high-performance inference engine, focusing on graph optimizations, operator fusion, and dynamic memory management (e.g., KV Cache optimization).</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Expertise in the CUDA programming model and NVIDIA GPU architectures (specifically Ampere/Hopper).</li>\n<li>Deep understanding of the memory hierarchy (Shared Memory, L2 cache, Registers), warp-level primitives, occupancy optimization, and bank conflict resolution.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proven ability to navigate and modify complex, large-scale codebases (e.g., PyTorch internals, Linux kernel).</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Starting January 26, 2026, Microsoft AI employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>\n<li>Microsoft is an equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or protected veteran status.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_356892b1-542","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-18/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","CUDA","NVIDIA GPU architectures","Deep Learning algorithms","low-level hardware"],"x-skills-preferred":["PyTorch","Linux kernel"],"datePosted":"2026-03-06T07:26:27.271Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Suzhou"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, CUDA, NVIDIA GPU architectures, Deep Learning algorithms, low-level hardware, PyTorch, Linux kernel"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4054dca1-a4f"},"title":"AI Inference Engineer","description":"<p>We are looking for an AI Inference engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, Kubernetes. 
You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Develop APIs for AI inference that will be used by both internal and external customers</li>\n<li>Benchmark and address bottlenecks throughout our inference stack</li>\n<li>Improve the reliability and observability of our systems and respond to system outages</li>\n<li>Explore novel research and implement LLM inference optimizations</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)</li>\n<li>Familiarity with common LLM architectures and inference optimization techniques (e.g. continuous batching, quantization, etc.)</li>\n<li>Understanding of GPU architectures or experience with GPU kernel programming using CUDA</li>\n</ul>\n<p><strong>Why this matters</strong></p>\n<p>As an AI Inference engineer, you will play a critical role in the development and deployment of our machine learning models. 
Your work will have a direct impact on the performance and reliability of our systems, and will help us to continue to innovate and improve our products.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4054dca1-a4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/e4777627-ff8f-4257-8612-3a016bb58592","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Final offer amounts are determined by multiple factors, including, experience and expertise.","x-skills-required":["ML systems","deep learning frameworks","GPU architectures"],"x-skills-preferred":["LLM architectures","inference optimization techniques"],"datePosted":"2026-03-04T12:27:20.012Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, deep learning frameworks, GPU architectures, LLM architectures, inference optimization techniques"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e37be4c0-4be"},"title":"AI Inference Engineer","description":"<p>Perplexity is looking for an AI Inference Engineer to join their team. 
The successful candidate will be responsible for developing APIs for AI inference, benchmarking and addressing bottlenecks throughout the inference stack, improving the reliability and observability of systems, and exploring novel research and implementing LLM inference optimisations.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As an AI Inference Engineer at Perplexity, you will have the opportunity to work on large-scale deployment of machine learning models for real-time inference. You will be responsible for developing APIs for AI inference that will be used by both internal and external customers.</p>\n<ul>\n<li>Develop APIs for AI inference that will be used by both internal and external customers</li>\n<li>Benchmark and address bottlenecks throughout our inference stack</li>\n<li>Improve the reliability and observability of our systems and respond to system outages</li>\n<li>Explore novel research and implement LLM inference optimisations</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To be successful in this role, you will need to have experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX), familiarity with common LLM architectures and inference optimisation techniques (e.g. continuous batching, quantisation, etc.), and understanding of GPU architectures or experience with GPU kernel programming using CUDA.</p>\n<ul>\n<li>Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)</li>\n<li>Familiarity with common LLM architectures and inference optimisation techniques (e.g. 
continuous batching, quantisation, etc.)</li>\n<li>Understanding of GPU architectures or experience with GPU kernel programming using CUDA</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e37be4c0-4be","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://www.perplexity.ai/","logo":"https://logos.yubhub.co/perplexity.ai.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/8a976851-9bef-4b07-8d36-567fa9540aef","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$220K – $405K","x-skills-required":["ML systems","deep learning frameworks","LLM architectures","inference optimisation techniques","GPU architectures","GPU kernel programming"],"x-skills-preferred":["continuous batching","quantisation","PyTorch","TensorFlow","ONNX"],"datePosted":"2026-03-04T12:24:24.046Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, New York City, Palo Alto"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, deep learning frameworks, LLM architectures, inference optimisation techniques, GPU architectures, GPU kernel programming, continuous batching, quantisation, PyTorch, TensorFlow, ONNX","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93d67895-5aa"},"title":"Director of Technical Art","description":"<p>We are seeking a talented Director of Technical Art to join our team and help shape the visual and technical direction of our mobile experiences. 
As a key member of our Art &amp; Animation team, you will be responsible for leading and guiding a team of technical artists to deliver high-quality visuals that meet our game&#39;s artistic vision and technical requirements.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Lead and guide a team of technical artists to deliver high-quality visuals that meet our game&#39;s artistic vision and technical requirements.</li>\n<li>Develop and maintain pipelines for materials, lighting, and rendering that support mobile and high-end platforms.</li>\n<li>Create tools and workflows that enable efficient iteration and validation across various device profiles.</li>\n<li>Establish and maintain standards for asset optimization, texture compression, and draw call management.</li>\n<li>Collaborate with gameplay, animation, and visual effects teams to ensure that visual resources contribute to gameplay legibility and responsiveness.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Proven experience in leading or directing technical art for real-time projects in mobile or multi-platform pipelines.</li>\n<li>Deep knowledge of Unreal Engine rendering pipelines, materials, and profiling tools.</li>\n<li>Strong technical foundation in shaders, lighting, performance optimization, and asset validation.</li>\n<li>Experience with advanced rendering techniques (e.g., dynamic lighting, post-processing, material layers, and virtualized geometry).</li>\n<li>Ability to develop tools and workflows that vary from stylized to realistic objectives.</li>\n<li>Familiarity with mobile GPU architectures, memory constraints, and performance considerations across platforms.</li>\n<li>Strong collaboration and communication skills in all areas.</li>\n<li>Ability to lead and develop technical art teams, promoting creative and technical excellence.</li>\n<li>Experience with Unreal Engine 5 (Lumen, Nanite, virtual shadow maps) and strategies for scaling them across various hardware 
levels.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93d67895-5aa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5710691004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Unreal Engine","Technical Art","Leadership","Collaboration","Communication"],"x-skills-preferred":["Shaders","Lighting","Performance Optimization","Asset Validation","Mobile GPU Architectures"],"datePosted":"2026-01-08T03:13:40.726Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Porto Alegre, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Unreal Engine, Technical Art, Leadership, Collaboration, Communication, Shaders, Lighting, Performance Optimization, Asset Validation, Mobile GPU Architectures"}]}