<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>372999e8-579</externalid>
      <Title>Senior Software Engineer II, AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer II on the AI Workload Orchestration team, you will help build and operate CoreWeave&#39;s Kubernetes-native platform for admitting, scheduling, and operating AI workloads at scale.</p>
<p>This platform integrates multiple orchestration and scheduling frameworks such as Kueue, Volcano, and Ray to support modern AI training and inference workflows. It complements SUNK (Slurm on Kubernetes) by providing a Kubernetes-first, cloud-native orchestration layer with deep platform integration.</p>
<p>You will own meaningful components of the platform, drive reliability and performance improvements, and help scale the system as customer demand and workload complexity continue to grow.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and operate Kubernetes-native services for AI workload orchestration and scheduling</li>
<li>Own one or more platform components end-to-end, including design, implementation, testing, and on-call support</li>
<li>Improve scheduling latency, cluster utilization, and workload reliability through metrics-driven engineering</li>
<li>Contribute to architectural discussions across services and influence design decisions within the platform</li>
<li>Work closely with adjacent teams (CKS, infrastructure, managed inference) to ensure clean interfaces and integrations</li>
<li>Mentor junior engineers and raise the quality bar for code, design, and operations</li>
</ul>
<p>About the role:</p>
<ul>
<li>5–8 years of professional software engineering experience in distributed systems, cloud infrastructure, or platform engineering</li>
<li>Strong experience building production systems in Go (Python or C++ a plus)</li>
<li>Solid understanding of Kubernetes fundamentals, APIs, controllers, and operating services in production</li>
<li>Experience working with scheduling, resource management, or quota-based systems</li>
<li>Proven ability to improve system reliability and performance using data and operational metrics</li>
<li>Comfortable owning services in production and participating in on-call rotations</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience with Kubernetes-native orchestration frameworks such as Kueue, Volcano, Ray, Kubeflow, or Argo Workflows</li>
<li>Familiarity with GPU-based workloads, ML training, or inference pipelines</li>
<li>Knowledge of scheduling concepts such as quota enforcement, pre-emption, and backfilling</li>
<li>Experience with reliability practices including SLOs, alerting, and incident response</li>
<li>Exposure to AI infrastructure, HPC, or large-scale distributed compute environments</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Go, Distributed systems, Cloud infrastructure, Platform engineering, Scheduling, Resource management, Quota-based systems, Kueue, Volcano, Ray, Kubeflow, Argo Workflows, GPU-based workloads, ML training, Inference pipelines, SLOs, Alerting, Incident response, AI infrastructure, HPC, Large-scale distributed compute environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647595006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>faffae87-882</externalid>
      <Title>Staff Software Engineer - GenAI Performance and Kernel</Title>
      <Description><![CDATA[<p>As a staff software engineer for GenAI Performance and Kernel, you will own the design, implementation, optimization, and correctness of the high-performance GPU kernels powering our GenAI inference stack. You will lead development of highly-tuned, low-level compute paths, manage trade-offs between hardware efficiency and generality, and mentor others in kernel-level performance engineering.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the design, implementation, benchmarking, and maintenance of core compute kernels optimized for various hardware backends (GPU, accelerators)</li>
<li>Driving the performance roadmap for kernel-level improvements: vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, auto-tuning, etc.</li>
<li>Integrating kernel optimizations with higher-level ML systems</li>
<li>Building and maintaining profiling, instrumentation, and verification tooling to detect correctness, performance regressions, numerical issues, and hardware utilization gaps</li>
<li>Leading performance investigations and root-cause analysis on inference bottlenecks, e.g. memory bandwidth, cache contention, kernel launch overhead, tensor fragmentation</li>
<li>Establishing coding patterns, abstractions, and frameworks to modularize kernels for reuse, cross-backend portability, and maintainability</li>
<li>Influencing system architecture decisions to make kernel improvements more effective (e.g. memory layout, dataflow scheduling, kernel fusion boundaries)</li>
<li>Mentoring and guiding other engineers working on lower-level performance, providing code reviews, and helping set best practices</li>
<li>Collaborating with infrastructure, tooling, and ML teams to roll out kernel-level optimizations into production, and monitoring their impact</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>Deep hands-on experience writing and tuning compute kernels (CUDA, Triton, OpenCL, LLVM IR, assembly, or similar) for ML workloads</li>
<li>Strong knowledge of GPU/accelerator architecture: warp structure, memory hierarchy (global, shared, register, L1/L2 caches), tensor cores, scheduling, SM occupancy, etc.</li>
<li>Experience with advanced optimization techniques: tiling, blocking, software pipelining, vectorization, fusion, loop transformations, auto-tuning</li>
<li>Familiarity with ML-specific kernel libraries (cuBLAS, cuDNN, CUTLASS, oneDNN, etc.) or open kernels</li>
<li>Strong debugging and profiling skills (Nsight, NVProf, perf, vtune, custom instrumentation)</li>
<li>Experience reasoning about numerical stability, mixed precision, quantization, and error propagation</li>
<li>Experience in integrating optimized kernels into real-world ML inference systems; exposure to distributed inference pipelines, memory management, and runtime systems</li>
<li>Experience building high-performance products leveraging GPU acceleration</li>
<li>Excellent communication and leadership skills: able to drive design discussions, mentor colleagues, and make trade-offs visible</li>
<li>A track record of shipping performance-critical, high-quality production software</li>
<li>Bonus: published in systems/ML performance venues (e.g. MLSys, ASPLOS, ISCA, PPoPP), experience with custom accelerators or FPGA, experience with sparsity or model compression techniques</li>
</ul>
<p>The pay range for this role is $190,900-$232,800 USD per year, depending on location and experience.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,900-$232,800 USD per year</Salaryrange>
      <Skills>Compute kernels, GPU/accelerator architecture, Advanced optimization techniques, ML-specific kernel libraries, Debugging and profiling skills, Numerical stability, Mixed precision, Quantization, Error propagation, Distributed inference pipelines, Memory management, Runtime systems, High-performance products, GPU acceleration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8202700002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>93a4ece6-182</externalid>
      <Title>Member of Technical Staff, Site Reliability Engineer (HPC)</Title>
<Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for experienced individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. We&#39;re looking for an experienced HPC Site Reliability Engineer (SRE) to join our High Performance Computing (HPC) infrastructure team. In this role, you&#39;ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable, efficient, and highly available.</p>
<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI&#39;s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being.</p>
<p>Responsibilities:</p>
<ul>
<li><strong>Reliability &amp; Availability:</strong> Ensure uptime, resiliency, and fault tolerance of HPC clusters powering MAI model training and inference</li>
<li><strong>Observability:</strong> Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into all aspects of HPC systems, including GPUs, clusters, storage, and networking</li>
<li><strong>Automation &amp; Tooling:</strong> Build automation for deployments, incident response, scaling, and failover in CPU+GPU environments</li>
<li><strong>Incident Management:</strong> Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements</li>
<li><strong>Security &amp; Compliance:</strong> Ensure data privacy, compliance, and secure operations across model training and serving environments</li>
<li><strong>Collaboration:</strong> Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science, Information Technology, or related field AND 2+ years technical experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering; OR Bachelor’s Degree in Computer Science, Information Technology, or related field AND 4+ years technical experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering; OR equivalent experience</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Strong proficiency in Kubernetes, Docker, and container orchestration</li>
<li>Knowledge of CI/CD pipelines for inference and ML model deployment</li>
<li>Hands-on experience with public cloud platforms such as Azure, AWS, or GCP, and with infrastructure-as-code</li>
<li>Expertise in monitoring and observability tools (Grafana, Datadog, OpenTelemetry, etc.)</li>
<li>Strong programming/scripting skills in Python, Go, or Bash</li>
<li>Solid knowledge of distributed systems, networking, and storage</li>
<li>Experience running large-scale GPU clusters for ML/AI workloads</li>
<li>Familiarity with ML training/inference pipelines</li>
<li>Experience with high-performance computing (HPC) and workload schedulers (e.g., Kubernetes operators)</li>
<li>Background in capacity planning and cost optimization for GPU-heavy environments</li>
</ul>
<p>Work on cutting-edge infrastructure that powers the future of Generative AI. Collaborate with world-class researchers and engineers. Impact millions of users through reliable and responsible AI deployments. Competitive compensation, equity options, and comprehensive benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Kubernetes, Docker, Container orchestration, CI/CD pipelines, Public cloud platforms (Azure, AWS, GCP), Infrastructure-as-code, Monitoring &amp; observability tools, Python, Go, Bash, Distributed systems, Networking, Storage, GPU clusters, ML training/inference pipelines, High-performance computing, Workload schedulers, Capacity planning, Cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-site-reliability-engineer-hpc-mai-superintelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>dd6e85f0-7a8</externalid>
      <Title>Software Engineer, Monetization AI/ML</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Monetization team is a new cross-functional group working across engineering, product, research, and design to build the foundational systems that will help OpenAI scale access to intelligence responsibly. Our mission is to develop user-first, privacy-preserving monetization products—including next-generation ads experiences—that strengthen user trust, unlock economic opportunity, and support OpenAI’s long-term innovation.</p>
<p>Monetization plays a critical role in enabling OpenAI to continue pushing the boundaries of AI capabilities while ensuring the benefits of AGI are broadly shared. We believe monetization must be aligned with user value, uphold rigorous privacy and safety standards, and sustain a healthy ecosystem of developers and businesses.</p>
<p>This team operates in a greenfield environment and moves quickly through prototyping, experimentation, and iterative deployment. We partner closely with Product, Design, and Research to bring research breakthroughs into real-world systems at global scale.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for experienced Software Engineers to help build OpenAI’s foundational ads ranking and recommendation systems—the ML and AI platforms that determine how monetized experiences are selected, ordered, and optimized across OpenAI products.</p>
<p>In this role, you’ll architect and implement large-scale, high-performance ML-driven systems with rigorous requirements around latency, correctness, safety, privacy, and continuous improvement. You’ll work on modern, transformer- and LLM-inspired architectures that move beyond traditional feature engineering toward more expressive, context-aware decisioning. Your work will have a direct revenue impact and make ChatGPT and other products accessible to more people with fewer usage limits or without having to pay.</p>
<p>This is a deeply technical, 0→1 founding-stage role where you’ll operate across backend engineering, systems design, and applied AI/ML to help define the next generation of AI-native monetization and recommendation platforms.</p>
<p>This role is exclusively based across our San Francisco and Seattle sites. We offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Architect, build, and evolve large-scale ads ranking and recommendation systems using modern ML and AI techniques</li>
<li>Design and productionize LLM- and transformer-inspired models that leverage sequential signals, long-horizon context, and sparse or delayed feedback</li>
<li>Develop model-driven decision logic and inference pipelines that operate under real-world constraints around performance, reliability, and privacy</li>
<li>Partner closely with Product, Design, and Research to define requirements and translate ambiguous product goals into scalable ML systems</li>
<li>Prototype, experiment, and rapidly iterate on new model architectures and training approaches to improve relevance, quality, and efficiency</li>
<li>Build services and infrastructure that support training, evaluation, online inference, and continuous optimization of ML models</li>
<li>Establish strong measurement, experimentation, and debugging practices to understand model behavior and system-level outcomes</li>
<li>Contribute to technical strategy and help shape the long-term evolution of OpenAI’s monetization and recommendation stack</li>
<li>Embed safety, fairness, and policy considerations directly into model design and system architecture from first principles</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years of experience building and scaling ML-powered systems in production environments</li>
<li>Have worked on ranking, recommendation, ads, marketplaces, or large-scale ML inference systems</li>
<li>Are comfortable operating across the full stack, from model development to backend services and production deployment</li>
<li>Enjoy deeply technical 0→1 problem spaces where architecture, strategy, and implementation overlap</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>Software Engineer, Machine Learning, Artificial Intelligence, Full Stack Development, Backend Engineering, Systems Design, Applied AI/ML, Transformer- and LLM-inspired architectures, Sequential signals, Long-horizon context, Sparse or delayed feedback, Model-driven decision logic, Inference pipelines, Real-world constraints, Performance, Reliability, Privacy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in developing and applying artificial intelligence (AI) to help humans learn, work, and create. It was founded in 2015 and has since become one of the leading AI research and development companies in the world.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a7b192bf-3d2e-4acb-9c97-fad3de0609db</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ba52acc3-4fd</externalid>
      <Title>Engineering Site Lead</Title>
      <Description><![CDATA[<p>We&#39;re seeking an exceptional Site Lead to establish and scale our London office. This is a unique opportunity to shape Perplexity&#39;s presence in one of the world&#39;s leading tech hubs, building teams and culture from the ground up while driving technical excellence in infrastructure and AI systems.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>As Site Lead, you&#39;ll serve as the face of Perplexity in London, responsible for building our technical organization, fostering a world-class engineering culture, and directly managing one or more infrastructure teams. You&#39;ll report to senior leadership and work cross-functionally with teams across our global footprint.</p>
<p><strong>What you need</strong></p>
<ul>
<li>10+ years of experience in software engineering with 5+ years in infrastructure, cloud infrastructure, or AI infrastructure roles</li>
<li>3+ years of people management experience, including building and scaling teams</li>
<li>Proven track record of establishing or significantly growing an engineering site or office</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Cloud platforms, Infrastructure automation, GPU infrastructure and orchestration, ML training and inference pipelines, Model serving and deployment at scale, Kubernetes, Terraform, Container orchestration, CI/CD systems, People management and team scaling, Site or office leadership, LLM training/inference infrastructure, Open-source infrastructure contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.com.png</Employerlogo>
      <Employerdescription>Perplexity is revolutionizing how people discover and interact with information through AI-powered search and knowledge tools. As we expand our global footprint, we&apos;re establishing a strategic presence in London to drive innovation and growth across Europe.</Employerdescription>
      <Employerwebsite>https://www.perplexity.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/638e6823-be7f-46c6-9675-7b1197fc9b8c</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-04</Postedate>
    </job>
  </jobs>
</source>