<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8cceb431-49c</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As an Engineering Manager on the Infrastructure team at Cursor, you&#39;ll lead the team that owns the foundational cloud, networking, storage, and compute layer that every service runs on: network foundations, container orchestration, edge and security infrastructure, data storage systems, and the compute runtimes that power production.</p>
<p>Cursor is one of the fastest-growing developer tools in the world, and you&#39;ll drive the cost management, regional deployment strategy, and infrastructure unification that make that growth possible. When your team&#39;s systems work well, every team is more productive, every product surface is more reliable, and Cursor can expand to serve developers everywhere.</p>
<p>You&#39;ll set technical direction, write and review code, and lead a team of strong infrastructure engineers, balancing hands-on contribution with growing your team&#39;s impact.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Own Kubernetes and cluster foundations: build and operate production clusters with proper service mesh, scaling, and ingress that teams can confidently deploy to.</li>
<li>Design the geo-deployment architecture: build a replicable, robust process for deploying geo-replicated services across cloud regions and providers.</li>
<li>Build edge and security infrastructure: design the networking and security layer at the edge to protect against abuse, manage rate limiting, and optimize traffic routing.</li>
<li>Own data storage strategy: lead the team&#39;s work on Postgres, OLAP systems, and caching layers, ensuring our storage infrastructure is reliable, performant, and scales with the product.</li>
<li>Own cost management and optimization: build attribution systems, identify waste, and ensure we&#39;re making smart tradeoffs between cost and reliability across all cloud spend.</li>
<li>Unify the compute platform: define a single, opinionated container orchestration strategy so every team gets consistent, reliable deployments out of the box.</li>
<li>Hire and grow the team: source, interview, and close top infrastructure talent, while developing your engineers through coaching, mentorship, and high-leverage project assignments.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have led engineering teams building and operating production infrastructure or platform systems at scale.</li>
<li>You have deep experience with AWS (or comparable cloud providers), especially VPC networking, EKS/K8s, and IAM/account management.</li>
<li>You&#39;ve built and operated production Kubernetes clusters at scale, including service mesh, autoscaling, and multi-region deployments.</li>
<li>You have strong opinions on databases, storage engines, caching, and schema design, and understand the tradeoffs between performance, consistency, and cost.</li>
<li>You understand edge networking, CDN/WAF architectures, and traffic management at the infrastructure level.</li>
<li>You care about infrastructure-as-code, reproducibility, and making it easy for other teams to self-serve reliable infrastructure.</li>
<li>Experience with cost optimization at scale, infrastructure migration/unification, or data storage systems (Postgres, ClickHouse, OLAP) is a plus.</li>
</ul>
<p><strong>Salary</strong></p>
<p>$150,000 - $200,000 per year</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>AWS (or comparable cloud providers)</li>
<li>VPC networking</li>
<li>EKS/K8s</li>
<li>IAM/account management</li>
<li>Kubernetes</li>
<li>Service mesh</li>
<li>Autoscaling</li>
<li>Multi-region deployments</li>
<li>Databases</li>
<li>Storage engines</li>
<li>Caching</li>
<li>Schema design</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Cost optimization at scale</li>
<li>Infrastructure migration/unification</li>
<li>Data storage systems (Postgres, ClickHouse, OLAP)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 - $200,000 per year</Salaryrange>
      <Skills>AWS, VPC networking, EKS/K8s, IAM/account management, Kubernetes, Service mesh, Autoscaling, Multi-region deployments, Databases, Storage engines, Caching, Schema design, Cost optimization at scale, Infrastructure migration/unification, Data storage systems (Postgres, ClickHouse, OLAP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a developer tools company, one of the fastest-growing in the world.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/engineering-manager-infrastructure</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>98858623-456</externalid>
      <Title>ML Platform Engineer, tvScientific</Title>
      <Description><![CDATA[<p>We&#39;re looking for an ambitious Systems / Platform Engineer to join a team at the intersection of SRE and low-latency distributed systems. This team will help power Pinterest&#39;s next generation of realtime ML and measurement infrastructure, with a focus on sub-millisecond decisioning, high-throughput data access, and tight integration with Pinterest&#39;s core tech stack.</p>
<p>In this role, you&#39;ll think about queries and RPCs in terms of syscalls, cache lines, and wire formats, and design systems that stay fast and predictable under load. You&#39;ll help define and harden the foundation for our training and serving stack: from storage and indexing strategies, to streaming and fanout, to backpressure and failure handling across services and regions.</p>
<p>You&#39;ll work closely with software engineering, data infra, and SRE partners to ensure our systems are observable, debuggable, and operable in production. If topics like IO scheduling and batching, lock-free or low-contention data structures, connection pooling, query planning, kernel and network tuning, on-disk layout and indexing, circuit-breaking, autoscaling, incident response, NixOS, Rust, and robust SLIs/SLOs sound interesting (even if it&#39;s just a subset), this role gives you a chance to apply that expertise to business-critical, high-leverage infrastructure at Pinterest scale.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Scale the tooling and decision-making processes for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>
<li>Improve the developer experience for the data science team</li>
<li>Upgrade our observability tooling</li>
<li>Make every deployment smooth as our infrastructure evolves</li>
</ul>
<p>What we&#39;re looking for:</p>
<ul>
<li>Deep understanding of Linux</li>
<li>Excellent writing skills</li>
<li>A systems-oriented mindset</li>
<li>Experience in high-performance software (RTB, HFT, etc.)</li>
<li>Software engineering experience + reliability (e.g. CI/CD) expertise</li>
<li>Strong observability instincts</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
</ul>
<p>Nice-To-Haves:</p>
<ul>
<li>Reverse-engineering experience</li>
<li>Terraform, EKS, or MLOps experience</li>
<li>Python, Scala, or Zig experience</li>
<li>NixOS experience</li>
<li>Adtech or CTV experience</li>
<li>Experience deploying a distributed system across multiple clouds</li>
<li>Experience in hard real-time, low-latency systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$123,696 - $254,667 USD</Salaryrange>
      <Skills>Linux, high-performance software, software engineering, reliability, observability, AI, data structures, kernel and network tuning, on-disk layout and indexing, circuit-breaking, autoscaling, incident response, NixOS, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tvScientific</Employername>
      <Employerlogo>https://logos.yubhub.co/tvscientific.com.png</Employerlogo>
      <Employerdescription>tvScientific is a CTV advertising platform purpose-built for performance marketers, leveraging massive data and cutting-edge science to automate and optimize TV advertising.</Employerdescription>
      <Employerwebsite>https://www.tvscientific.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7782571</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2c095439-13b</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core capabilities of our ad-serving infrastructure: the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>
<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>
<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>
<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>
<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>
<p>This is a rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to deliver innovation at Internet scale.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>
<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>
<p>This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</li>
<li>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</li>
<li>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</li>
<li>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</li>
<li>Collaborate and mentor across teams, driving architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and performance engineering.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<ul>
<li>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</li>
<li>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</li>
<li>Familiarity with LLM inference optimization: model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</li>
<li>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</li>
<li>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</li>
<li>Deep expertise in GPU inference frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement (H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</li>
<li>Solid understanding of model-serving trade-offs: batching vs. streaming, latency vs. throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</li>
<li>Proven ability to profile and optimize GPU and system workloads, including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</li>
<li>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</li>
</ul>
<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. A different range applies to specific work locations within the San Francisco Bay area and New York City metropolitan area; the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>
<p>Certain roles may be eligible for benefits and other compensation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is an American multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-41/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
  </jobs>
</source>