{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/load-balancing"},"x-facet":{"type":"skill","slug":"load-balancing","display":"Load Balancing","count":18},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61e346b2-915"},"title":"Sr. Software Engineer, Inference","description":"<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. 
We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p>Representative projects across the org:</p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global 
customers</li>\n</ul>\n<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>\n<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_61e346b2-915","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5152348008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£225,000-£325,000 GBP","x-skills-required":["High-performance, large-scale distributed systems","Implementing and deploying machine learning systems at scale","Load balancing, request routing, or traffic management systems","LLM inference optimization, batching, and caching strategies","Kubernetes and cloud infrastructure (AWS, GCP)","Python or Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T16:00:17.377Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":225000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d4c3fc5-2ed"},"title":"Senior Software Engineer, Inference","description":"<p>About the 
role:</p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p>Strong candidates may also have experience with:</p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p>Representative projects across the org:</p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet 
to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>\n<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>\n<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>\n<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p>How we&#39;re different:</p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p>Come work with us!</p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d4c3fc5-2ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4641822008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"€235,000-€295,000 EUR","x-skills-required":["High-performance, large-scale distributed systems","Implementing and deploying machine learning systems at scale","Load balancing, request routing, or traffic management systems","LLM inference optimization, batching, and caching strategies","Kubernetes and cloud infrastructure (AWS, GCP)","Python or Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:09.302Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud 
infrastructure (AWS, GCP), Python or Rust","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":235000,"maxValue":295000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_82e9a289-022"},"title":"Senior Software Engineer  - Application Traffic team","description":"<p>As a Senior Software Engineer on the Application Traffic team, you will design and build the systems that power Databricks&#39; service-to-service communication across thousands of clusters in a multi-cloud environment. You will also help create abstractions that hide networking complexity from product teams, making connectivity, discovery, and reliability seamless by default.</p>\n<p>You&#39;ll work across three key areas that define Databricks&#39; networking stack:</p>\n<p>Ingress Control Plane: Build the control plane for Databricks&#39; global ingress layer. Enable programming of API gateways with static and dynamic endpoints, simplify service onboarding, and make it easy to expose APIs securely across clouds.</p>\n<p>Service-to-Service Communication: Design scalable mechanisms for service discovery and load balancing across thousands of clusters. Provide networking abstractions so product teams don&#39;t need to worry about underlying connectivity details.</p>\n<p>Overload Protection: Build intelligent rate limiting and admission control systems to protect critical services under high load. Ensure reliability and predictable performance for both customer-facing and internal workloads.</p>\n<p>We&#39;re looking for someone with a strong proficiency in one or more languages such as Java, Scala, Go, or C++, and experience with service-oriented architectures and large scale distributed systems. Familiarity with cloud platforms (AWS, Azure, GCP) and container/orchestration technologies (Kubernetes, Docker) is also required. 
A track record of shipping infrastructure that supports mission-critical workloads at scale is essential.</p>\n<p>The pay range for this role is $166,000-$225,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_82e9a289-022","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8183195002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","Go","C++","service-oriented architectures","large scale distributed systems","cloud platforms","container/orchestration technologies"],"x-skills-preferred":["service discovery","DNS","load balancing","Envoy"],"datePosted":"2026-04-18T15:57:51.589Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Go, C++, service-oriented architectures, large scale distributed systems, cloud platforms, container/orchestration technologies, service discovery, DNS, load balancing, Envoy","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_90312625-68e"},"title":"Senior Software Engineer - Traffic Management","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. 
Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a member of the Traffic Management team, you will build and extend various traffic management and supporting systems. You will work closely with Network Engineering, Product Engineering, Network Strategy, and other teams to collaborate on ambitious initiatives to make the best use of Cloudflare’s global network.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Build and extend various traffic management and supporting systems</li>\n<li>Work closely with Network Engineering, Product Engineering, Network Strategy, and other teams</li>\n<li>Participate in all stages of the software development lifecycle</li>\n<li>Work with a wide range of technologies and programming languages</li>\n<li>Use AI-powered tools and systems as part of your daily workflow</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Minimum of 6 years of engineering experience with networking and/or distributed systems</li>\n<li>Systems-level programming experience in Go, Python, Rust, C, or C++</li>\n<li>Strong knowledge of networking protocols in Layers 3 and 4 of the OSI Model</li>\n<li>Knowledge of HTTP, TLS, and CDN networks</li>\n<li>Experience in implementing secure and highly-available distributed systems</li>\n<li>Strong ability to debug issues in complex systems</li>\n<li>Strong collaboration and communication skills across teams and functions</li>\n<li>Experience participating in an on-call rotation</li>\n<li>Willingness to adopt and integrate AI tools and systems into your engineering workflow</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_90312625-68e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7463839","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Python","Rust","C","C++","Networking protocols","HTTP","TLS","CDN networks","Distributed systems","AI-powered tools"],"x-skills-preferred":["Traffic engineering","Automated load balancing","Traffic prioritization","Statistical-analysis techniques","Control theory","TCP/IP","Internet routing"],"datePosted":"2026-04-18T15:57:34.283Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Rust, C, C++, Networking protocols, HTTP, TLS, CDN networks, Distributed systems, AI-powered tools, Traffic engineering, Automated load balancing, Traffic prioritization, Statistical-analysis techniques, Control theory, TCP/IP, Internet routing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fd6d120d-6ff"},"title":"Senior Platform Software Engineer, Transport","description":"<p>About Us</p>\n<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. 
As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>\n<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>\n<p>In this role, you can expect to:</p>\n<ul>\n<li>Join a senior, distributed team: Become part of a closely-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>\n</ul>\n<ul>\n<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>\n</ul>\n<ul>\n<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>\n</ul>\n<ul>\n<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>\n</ul>\n<ul>\n<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>\n</ul>\n<ul>\n<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>\n</ul>\n<ul>\n<li>Collaborate and mentor: Work closely with product engineering teams, 
security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>\n</ul>\n<ul>\n<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>\n</ul>\n<p>You are a good fit if you have:</p>\n<ul>\n<li>Worked asynchronously as part of a fully-remote, distributed team</li>\n</ul>\n<ul>\n<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>\n</ul>\n<ul>\n<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>\n</ul>\n<ul>\n<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>\n</ul>\n<ul>\n<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies , or have a strong desire to learn and grow in this domain.</li>\n</ul>\n<ul>\n<li>Take strong ownership of your work from end-to-end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>\n</ul>\n<ul>\n<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>\n</ul>\n<p>You&#39;ll have an edge if you have:</p>\n<ul>\n<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>\n</ul>\n<ul>\n<li>A proven track record of building internal 
developer platforms or self-service infrastructure that empowers other engineers.</li>\n</ul>\n<ul>\n<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>\n</ul>\n<ul>\n<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>\n</ul>\n<ul>\n<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>\n</ul>\n<ul>\n<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>\n</ul>\n<p>Qualifications</p>\n<ul>\n<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>\n</ul>\n<ul>\n<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>\n</ul>\n<p><strong>Compensation &amp; Benefits</strong></p>\n<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. 
Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>\n<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>\n<ul>\n<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>\n</ul>\n<ul>\n<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>\n</ul>\n<p>Equity Stake Benefits</p>\n<ul>\n<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>\n</ul>\n<ul>\n<li>Equity or comparable benefits may be offered depending on the legal limitations</li>\n</ul>\n<p><strong>Our Hiring Process (All Video Interviews)</strong></p>\n<ul>\n<li>Interview with a Talent Acquisition Partner (30 Mins)</li>\n</ul>\n<ul>\n<li>Technical Interview with Hiring Manager (60 Mins)</li>\n</ul>\n<ul>\n<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>\n</ul>\n<ul>\n<li>Final Values Interview (30 Mins)</li>\n</ul>\n<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? 
Check out our DEI page.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fd6d120d-6ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$147,000 - $178,000 USD","x-skills-required":["Go","Python","Rust","Cloud infrastructure","Containerization","Infrastructure as Code","Cloud networking","Load balancing","DNS","VPCs","Proxies","Service mesh technologies"],"x-skills-preferred":["Cell-based or multi-tenant architectures","Building tooling for large-scale account migrations","Cloud networking tools","Multi-cloud strategies","Cross-cloud management and cost optimization"],"datePosted":"2026-04-18T15:57:06.377Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":147000,"maxValue":178000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ac45e205-e7d"},"title":"Engineering Manager, Inference Routing and 
Performance","description":"<p><strong>About the role</strong>\nEvery request that hits Claude, from claude.ai, the API, our cloud partners, or internal research, passes through a routing decision. Not a generic load balancer round-robin, but a decision that accounts for what&#39;s already cached where, which accelerator the request runs best on, and what else is in flight across the fleet.\n\nGet it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn&#39;t have been shed.\n\nThe Inference Routing team owns this layer. We build the cluster-level routing and coordination plane for Anthropic&#39;s inference fleet, the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time.\n\nAs Anthropic moves from &quot;many independent inference replicas&quot; toward &quot;a single warehouse-scale computer running a coordinated program,&quot; Dystro is the coordination layer. This is a deeply technical team.\n\n
The engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators.\n\nThey work shoulder-to-shoulder with teams that write kernels and ML framework internals.\n\nThe EM for this team doesn&#39;t need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.\n\nYou&#39;ll inherit a strong team of distributed-systems engineers, and you&#39;ll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence.\n\nThe job is holding both.\n\n
## Representative work:\nThings the Inference Routing EM actually spends time on:\n- Deciding whether a proposed routing algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses\n- Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers\n- Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to per-replica behavior to a root cause in the networking stack\n- Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win\n- Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick\n- Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they&#39;d be additive to a team that already goes deep\n\n
## What you&#39;ll do:\nDrive system-level performance\n- Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync\n- Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results\n- Build the team&#39;s habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is\n\nDeliver reliably and operate cleanly\n- Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces\n- Run the team&#39;s operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile\n- Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. You&#39;ll make sure commitments are realistic, dependencies are understood, and nobody is surprised\n\nBuild and grow the team\n- Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability\n- Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you&#39;ll help make that collaboration pattern productive\n- Pick up slack when it matters. This is a small team in a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate\n\n
## You may be a good fit if you:\n- Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale\n- Have a deep systems background: load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level\n- Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was\n- Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline\n- Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions\n- Build strong relationships across team boundaries; this is a seam role, and much of the job is making sure other teams can rely on yours\n- Are curious about machine learning systems. 
You don&#39;t need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems\\n\\nStrong candidates may also have:\\n- Experience with LLM inference serving , KV caching, continuous batching, request scheduling, prefill/decode disaggregation\\n- Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale\\n- Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement\\n- Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging , enough to follow and evaluate the technical work, not necessarily to do it daily\\n- Led teams at supercomputing or hyperscaler infrastructure scale\\n- Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery\\n\\nThe annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\nAnnual Salary: $405,000-$485,000 USD</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ac45e205-e7d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155391008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["engineering management","distributed systems","load balancing","scheduling","cache-coherent distributed state","high-performance networking","machine learning 
systems"],"x-skills-preferred":["LLM inference serving","cluster schedulers","load balancers","service meshes","coordination planes","heterogeneous accelerator fleets","GPU/TPU/Trainium","GPU/accelerator programming","ML framework internals","OS-level performance debugging"],"datePosted":"2026-04-18T15:56:48.587Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, distributed systems, load balancing, scheduling, cache-coherent distributed state, high-performance networking, machine learning systems, LLM inference serving, cluster schedulers, load balancers, service meshes, coordination planes, heterogeneous accelerator fleets, GPU/TPU/Trainium, GPU/accelerator programming, ML framework internals, OS-level performance debugging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e394b0fa-2ba"},"title":"Staff Software Engineer, Inference","description":"<p><strong>About the role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. 
Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Representative projects across the org</strong></p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model 
architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p><strong>Deadline to apply</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Annual compensation range</strong></p>\n<p>The annual compensation range for this role is £325,000-£390,000 GBP.</p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>Why work with us?</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. 
We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e394b0fa-2ba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097742008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":["high-performance distributed systems","machine learning systems","load balancing","request routing","traffic management","caching 
strategies"],"datePosted":"2026-04-18T15:50:52.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust, high-performance distributed systems, machine learning systems, load balancing, request routing, traffic management, caching strategies","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5a3deb2-908"},"title":"Senior Software Engineer, Inference","description":"<p>Job Title: Senior Software Engineer, Inference</p>\n<p>About the Role:</p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. 
We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Significant software engineering experience, particularly with distributed systems</li>\n<li>Results-oriented, with a bias towards flexibility and impact</li>\n<li>Ability to pick up slack, even if it goes outside your job description</li>\n<li>Willingness to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p>Note: The salary range for this role is €235,000-€295,000 EUR per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5a3deb2-908","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4641822008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"€235,000-€295,000 EUR per year","x-skills-required":["High-performance, large-scale distributed systems","Implementing and deploying machine learning systems at scale","Load balancing, request routing, or traffic management systems","LLM inference optimization, batching, and caching strategies","Kubernetes and cloud infrastructure (AWS, GCP)","Python or Rust"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:39.086Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, IE"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":235000,"maxValue":295000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_db7b0f51-7df"},"title":"Senior Cloud Support Engineer","description":"<p>As a Senior Cloud Support Engineer at CoreWeave, you&#39;ll be on the front lines of a technological revolution, empowering our customers to harness the full potential of our advanced Kubernetes-powered HPC cloud infrastructure.</p>\n<p>You&#39;ll be hands-on, collaborating with 
engineers and researchers to resolve issues that impact high-profile, mission-critical applications and cutting-edge AI training workloads. Your contributions will be pivotal in ensuring seamless performance, reliability, and success for our customers, positioning you at the very core of transformative technologies reshaping industries worldwide at a company that is truly one of a kind.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Guide and mentor team members in developing their technical skills and troubleshooting capabilities across all disciplines supported by CoreWeave.</li>\n<li>Provide real-time feedback and coaching, reviewing tickets to identify opportunities for improvement and ensure quality assurance (QA).</li>\n<li>Develop and deliver training sessions to improve the team&#39;s proficiency and efficiency in resolving customer issues.</li>\n<li>Use technical expertise to investigate, debug, and resolve customer-impacting issues with the curiosity required to uncover and understand root causes.</li>\n<li>Maintain high customer satisfaction through swift, accurate, and empathetic high-touch support communications, as well as established best practices.</li>\n<li>Help design and implement troubleshooting best practices to ensure fast, accurate client resolutions.</li>\n<li>Contribute to refining processes, workflows, and playbooks for handling complex customer challenges.</li>\n<li>Serve as a technical escalation point for high-priority escalations or complex cases, modeling effective problem-solving approaches.</li>\n<li>Lead the creation of knowledge-sharing resources, including documentation, tutorials, and how-to guides.</li>\n<li>Enhance the support team&#39;s knowledge of CoreWeave&#39;s products and services through continuous learning initiatives.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>Have a Bachelor&#39;s degree in Information Science / Information Technology, Data Science, Computer Science, Engineering, Mathematics, Physics, or a related 
field, OR equivalent experience in a technical position</li>\n<li>At least 5+ years of experience in cloud support, systems administration, or related technical support-focused roles</li>\n<li>Proven hands-on work experience with Kubernetes</li>\n<li>Experience with networking, load balancing, storage volumes, observability, node management, High-Performance Computing (HPC), and Linux system administration</li>\n<li>Proven ability to mentor team members, foster technical growth, and improve team-wide capabilities through guidance and feedback</li>\n<li>Experience with observability tools such as Grafana</li>\n<li>Strong troubleshooting skills, with experience resolving complex customer issues and driving quality assurance through ticket reviews or similar processes</li>\n<li>Demonstrated success collaborating with cross-functional teams to refine workflows, implement best practices, and advocate for necessary tools or process changes</li>\n<li>Excellent written and verbal communication skills, with a track record of simplifying complex concepts for diverse audiences</li>\n<li>Strong technical presentation skills, with experience delivering precise, engaging, and informative presentations to technical and non-technical audiences, effectively showcasing complex concepts and solutions</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>CKA Certified</li>\n<li>Demonstrated experience with training, coaching, and creating onboarding materials.</li>\n<li>Operates in a fast-paced, global, 24/7 support team environment</li>\n<li>Ability to collaborate across different time zones</li>\n<li>On-site office environment, hybrid, or remote options depending on location</li>\n<li>Flexible to travel up to 10% (~25 days/year)</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. 
Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_db7b0f51-7df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4568136006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,000 to $163,000","x-skills-required":["cloud support","systems administration","Kubernetes","networking","load balancing","storage volumes","observability","node management","High-Performance Computing (HPC)","Linux system administration"],"x-skills-preferred":["CKA Certified","training","coaching","onboarding materials","fast-paced global support team environment","collaboration across different time zones"],"datePosted":"2026-04-18T15:49:50.841Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, 
WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud support, systems administration, Kubernetes, networking, load balancing, storage volumes, observability, node management, High-Performance Computing (HPC), Linux system administration, CKA Certified, training, coaching, onboarding materials, fast-paced global support team environment, collaboration across different time zones","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":122000,"maxValue":163000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f3f1713-f74"},"title":"Systems Reliability Engineer","description":"<p>About Us</p>\n<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code.</p>\n<p>As a Systems Reliability Engineer on one of our Production Engineering teams, you&#39;ll be building the tools to help engineers deploy and operate the services that make Cloudflare work. 
Our mission is to provide a reliable, yet flexible, platform to help product teams release new software efficiently and safely.</p>\n<p>Core platforms we operate at Cloudflare include:</p>\n<ul>\n<li>Kubernetes</li>\n<li>Kafka</li>\n<li>Developer tools, CI, and CD systems</li>\n<li>Vault, Consul</li>\n<li>Terraform</li>\n<li>Temporal Workflows</li>\n<li>Cloudflare Developer Platform</li>\n</ul>\n<p>Responsibilities</p>\n<ul>\n<li>Build software that automates the operation of large, highly-available distributed systems.</li>\n<li>Ensure platform security, and guide security best practices</li>\n<li>Document your work and guide fellow developers towards optimal solutions</li>\n<li>Contribute back to the open source community</li>\n<li>Leave code better than we found it</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Recent career experience with Go or Python and at least 3 years experience in the role of full-time software engineer (any language). Rust is an added bonus.</li>\n<li>Experience with deploying and managing services using Docker on Linux</li>\n<li>A firm grasp of IP networking, load balancing and DNS</li>\n<li>Excellent debugging skills in a distributed systems environment</li>\n<li>Source control experience including branching, merging and rebasing (we use git)</li>\n<li>The ability to break down complex problems and drive towards a solution</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience with Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs on Kubernetes</li>\n<li>Operational experience deploying and managing large systems on bare metal</li>\n<li>Experience as a Site Reliability Engineer (SRE) for a large-scale company</li>\n<li>You have practical knowledge of web and systems performance, and extensively used tracing tools like ebpf and strace.</li>\n<li>Alerting and monitoring (Prometheus/Alert Manager), Configuration Management (salt)</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, 
large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f3f1713-f74","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7453074","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Python","Docker","Linux","IP networking","load balancing","DNS","source control","git","Kubernetes","Kafka","Vault","Consul","Terraform","Temporal Workflows","Cloudflare Developer Platform"],"x-skills-preferred":["Rust","Deployment","StatefulSets","Persistent Volumes Claims","Ingresses","CRDs","ebpf","strace","Prometheus","Alert Manager","salt"],"datePosted":"2026-04-18T15:47:02.171Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Docker, Linux, IP networking, load balancing, DNS, source control, git, Kubernetes, Kafka, Vault, Consul, Terraform, Temporal Workflows, Cloudflare Developer Platform, Rust, Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs, ebpf, strace, Prometheus, Alert Manager, salt"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09124d22-4e0"},"title":"Senior Product Manager; Load Balancing","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. 
Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a member of the growing team of product managers, your responsibilities include:</p>\n<ul>\n<li>Own the product vision for your area. Ensure that it aligns with the overall product and company vision.</li>\n</ul>\n<ul>\n<li>Be an expert on the domain, the market, the trends. Evangelize the vision constantly so all stakeholders are aligned, have context and understand where we are going.</li>\n</ul>\n<ul>\n<li>Represent the customer. Be the champion and voice of customers. Build intimate, personal customer relationships. Bring the customer&#39;s voice into the creation process.</li>\n</ul>\n<ul>\n<li>Manage the roadmap. Make tough tactical prioritization decisions while helping the company think long-term. Build trust with stakeholders by maintaining an understandable, accurate roadmap.</li>\n</ul>\n<ul>\n<li>Author use cases and prioritize requirements. Translate market observations and customer feedback into a prioritized product backlog. Author use cases based on specific real-world product applications and extrapolate detailed product requirements for the scenarios.</li>\n</ul>\n<ul>\n<li>Collaborate across teams. We win or lose as a team. Product managers play a critical role in creating alignment between engineering teams and stakeholders. A collaborative attitude is essential to the job.</li>\n</ul>\n<ul>\n<li>Measure success. Own the measures used to define success for your product. Success measures must be defined at the inception of a product and tracked throughout its lifecycle. Make measures visible to all stakeholders and interpret them into actionable conclusions and new hypotheses.</li>\n</ul>\n<ul>\n<li>Develop new opportunities. 
With your finger on the pulse of the market, the customers and the engineering teams, you are uniquely positioned to discover and develop new opportunities.</li>\n</ul>\n<p>Desirable Skills, Knowledge, and Experience: The ideal candidate is entrepreneurial-minded and thrives in a fast-paced and goal-driven environment. You have outstanding communication and collaboration skills and are able to work with a diverse group, get consensus, and drive the product forward. You are execution focused and emphasize getting things done while paying attention to important details.</p>\n<ul>\n<li>BS/MS in a technology- or business-related field</li>\n</ul>\n<ul>\n<li>Experience in building, configuring or using Load Balancing products, specifically with a focus on both public and private traffic load balancing.</li>\n</ul>\n<ul>\n<li>5+ years of product management or equivalent experience with demonstrated ability to discover opportunities, and then define and deliver products</li>\n</ul>\n<ul>\n<li>Exceptional communication, presentation, organizational and analytical skills</li>\n</ul>\n<ul>\n<li>Demonstrated ability to lead, drive consensus and deliver in a matrix organization with multiple stakeholders</li>\n</ul>\n<ul>\n<li>Must be able to define and manage complex process and/or product issues of a broad scope using independent judgment</li>\n</ul>\n<ul>\n<li>Experience balancing execution, agility, and culture at a fast-growing business.</li>\n</ul>\n<ul>\n<li>Strong technical abilities. You are intimately familiar with modern software development practices used to build and deploy applications. You&#39;ve preferably been working full time on a software delivery team.</li>\n</ul>\n<ul>\n<li>Strong customer and stakeholder empathy. You must be not only the voice of the customer, but at various times the voice of marketing, finance, engineering, support and ops. 
You must be able to channel many points of view.</li>\n</ul>\n<ul>\n<li>Working knowledge of how the internet works at layers 3 through 7.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>A solid understanding of RESTful API design &amp; documentation.</li>\n</ul>\n<ul>\n<li>Ability to validate early ideas through quantitative and qualitative methods (A/B and multivariate testing, user testing, etc).</li>\n</ul>\n<ul>\n<li>Experience in user experience (UX) design is a plus.</li>\n</ul>\n<ul>\n<li>Hands-on experience with command-line tools used for interacting with APIs, debugging and testing (e.g. curl, dig, etc.)</li>\n</ul>\n<ul>\n<li>Scripting/programming experience (Python, Go, etc.) and/or experience with web frameworks</li>\n</ul>\n<ul>\n<li>Domain expertise (e.g. as a practitioner or technology provider) in several of: HTTP content delivery, network performance, browser technologies and Internet security.</li>\n</ul>\n<ul>\n<li>Pricing strategy and revenue forecasting experience.</li>\n</ul>\n<p>Compensation:</p>\n<ul>\n<li>For Bay Area based hires: Estimated annual salary of $179,000 - $246,000</li>\n</ul>\n<ul>\n<li>For New York City, Washington, Washington D.C. and California (excluding Bay Area) based hires: Estimated annual salary of $172,000 - $237,000</li>\n</ul>\n<ul>\n<li>For Colorado based hires: Estimated annual salary of $156,000 - $215,000</li>\n</ul>\n<p>Equity:</p>\n<p>This role is eligible to participate in Cloudflare’s equity plan.</p>\n<p>Benefits:</p>\n<p>Cloudflare offers a complete package of benefits and programs to support you and your family. 
Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09124d22-4e0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7409997","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$156,000 - $246,000","x-skills-required":["Load Balancing","DNS","RESTful API design","API documentation","Quantitative and qualitative methods","User experience (UX) design","Command-line tools","Scripting/programming","Web frameworks","Domain expertise","Pricing strategy","Revenue forecasting"],"x-skills-preferred":["Entrepreneurial mindset","Fast-paced and goal-driven environment","Communication and collaboration skills","Execution-focused","Strong technical abilities","Customer and stakeholder empathy","Working knowledge of how the internet works at layers 3 through 7"],"datePosted":"2026-04-18T15:45:06.571Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Load Balancing, DNS, RESTful API design, API documentation, Quantitative and qualitative methods, User experience (UX) design, Command-line tools, Scripting/programming, Web frameworks, Domain expertise, Pricing strategy, Revenue forecasting, Entrepreneurial mindset, Fast-paced and goal-driven environment, Communication and collaboration skills, Execution-focused, Strong technical abilities, Customer and stakeholder empathy, Working knowledge of how the internet works at layers 3 through 
7","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":156000,"maxValue":246000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c8c0d024-434"},"title":"Senior Technical Support Engineer, Application Performance","description":"<p>At Cloudflare, we&#39;re looking for a Senior Technical Support Engineer to join our Customer Support Team. As a Senior Technical Support Engineer, you will be responsible for working with your peer Support engineers and cross-functional teams to tackle the toughest issues for our highest-profile customers.</p>\n<p>You&#39;ll gain hands-on experience with our products, learn the inner workings of Cloudflare&#39;s offerings, and continue to extend and deepen your understanding of foundational internet technologies. This role also provides opportunities to develop valuable technical and professional skills as well as job shadowing experiences to explore different roles within the company.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the toughest technical challenges and escalations within the scope of the Customer Support team while maintaining high standards of individual and team performance</li>\n<li>Maintain clear, timely, and productive communications with customers via our ticketing system and phone support channels</li>\n<li>Become a go-to resource for technical and process knowledge inside and outside the Support organization</li>\n<li>Proactively identify and execute on opportunities for team growth and improvement</li>\n<li>Assist with training and mentoring other team members</li>\n<li>Help create and update technical documentation and runbooks</li>\n<li>Provide feedback on our product and potential improvements based on customer interactions</li>\n<li>Support the team in testing new releases and reporting bugs</li>\n<li>Perform other duties/projects as assigned</li>\n</ul>\n<p>Required 
Skills and Experience:</p>\n<ul>\n<li>5-7 years of experience working in a technical Customer Support role supporting large enterprise and SMB clients</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Self-driven and comfortable learning new technologies and systems on an ongoing basis</li>\n<li>Strong understanding of how the Internet works at OSI Model layers 3, 4, and 7</li>\n<li>Strong understanding of DNS, SSL/TLS, and HTTP(S) protocols</li>\n<li>Strong understanding of HTTP reverse proxying, caching, and load balancing</li>\n<li>Experience using Linux and associated command line tools, including curl, dig, traceroute, openssl, git, etc.</li>\n<li>Experience writing scripts in Bash, Python, JavaScript, or other scripting languages</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Prior experience with the Cloudflare platform, especially for personal projects/websites</li>\n<li>Experience troubleshooting network connectivity issues, BGP routing, and GRE tunnels</li>\n<li>Experience configuring network or application firewalls</li>\n<li>Experience with web development and/or web hosting</li>\n<li>Degrees or certifications in Computer Science, Information Technology, and related fields</li>\n<li>Fluency in Mandarin, Spanish, and/or Portuguese</li>\n</ul>","url":"https://yubhub.co/jobs/job_c8c0d024-434","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7612087","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["DNS","SSL/TLS","HTTP(S)","HTTP reverse proxying","caching","load 
balancing","Linux","curl","dig","traceroute","openssl","git","Bash","Python","JavaScript"],"x-skills-preferred":["Cloudflare platform","network connectivity issues","BGP routing","GRE tunnels","network firewalls","web development","web hosting","Computer Science","Information Technology"],"datePosted":"2026-04-18T15:43:55.627Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"DNS, SSL/TLS, HTTP(S), HTTP reverse proxying, caching, load balancing, Linux, curl, dig, traceroute, openssl, git, Bash, Python, JavaScript, Cloudflare platform, network connectivity issues, BGP routing, GRE tunnels, network firewalls, web development, web hosting, Computer Science, Information Technology"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_63af8568-789"},"title":"Engineering Manager, Inference Routing and Performance","description":"<p><strong>About the role\\nEvery request that hits Claude, whether from claude.ai, the API, our cloud partners, or internal research, passes through a routing decision. Not a generic load balancer round-robin, but a decision that accounts for what&#39;s already cached where, which accelerator the request runs best on, and what else is in flight across the fleet.\\n\\nGet it right and you extract meaningfully more throughput from the same hardware. Get it wrong and you burn capacity, miss latency SLOs, or shed load that shouldn&#39;t have been shed.\\n\\nThe Inference Routing team owns this layer. 
We build the cluster-level routing and coordination plane for Anthropic&#39;s inference fleet: the system that sits between the API surface and the inference engines themselves, making fleet-wide efficiency decisions in real time.\\n\\nAs Anthropic moves from &quot;many independent inference replicas&quot; toward &quot;a single warehouse-scale computer running a coordinated program,&quot; Dystro is the coordination layer. This is a deeply technical team.\\n\\nThe engineers here design custom load-balancing algorithms, build quantitative models of system performance, debug latency spikes that cross kernel, network, and framework boundaries, and reason carefully about cache placement across thousands of accelerators.\\n\\nThey work shoulder-to-shoulder with teams that write kernels and ML framework internals.\\n\\nThe EM for this team doesn&#39;t need to write kernels, but they do need the systems depth to make architectural calls, evaluate deeply technical candidates, and spot when a proposed optimization will have second-order effects on the fleet.\\n\\nYou&#39;ll inherit a strong team of distributed-systems engineers, and you&#39;ll be accountable for two things that pull in different directions: shipping system-level performance improvements that measurably increase fleet throughput and efficiency, and running the team operationally so that deploys are safe, incidents are rare, and the teams who depend on Dystro can plan around you with confidence.\\n\\nThe job is holding both.\\n\\n## Representative work:\\nThings the Inference Routing EM actually spends time on:\\n- Deciding whether a proposed routing algorithm change is worth the deploy risk, given the modeled throughput gain and the blast radius if it regresses\\n- Sequencing a quarter where KV-cache offload, a new coordination protocol, and two model launches all compete for the same engineers\\n- Working through a persistent tail-latency regression with the team, walking down from fleet-level metrics to 
per-replica behavior to a root cause in the networking stack\\n- Building the case (with numbers) to peer teams for why a cross-team protocol change unlocks the next efficiency win\\n- Running the post-incident review after a cache-eviction bug caused a capacity event, and turning it into process changes that stick\\n- Interviewing a candidate who has built schedulers at supercomputing scale, and deciding whether they&#39;d be additive to a team that already goes deep\\n\\n## What you&#39;ll do:\\nDrive system-level performance\\n- Own the technical roadmap for cluster-level inference efficiency: routing decisions, cache placement and eviction, cross-replica coordination, and the protocols that keep routing and inference engines in sync\\n- Partner with the inference engine, kernels, and performance teams to identify fleet-level throughput and latency wins, then turn those into shipped improvements with measurable results\\n- Build the team&#39;s habit of quantitative performance modeling: claim a win only when you can measure it, and know before you ship what the expected effect is\\n\\nDeliver reliably and operate cleanly\\n- Set technical strategy for how routing evolves across heterogeneous hardware (GPUs, TPUs, Trainium) and across all our serving surfaces\\n- Run the team&#39;s operational backbone (on-call rotation, incident response, postmortem review, deploy safety) so the team can ship aggressively without the system becoming fragile\\n- Create clarity at a seam: Inference Routing sits between the API surface, the inference engines, and the cloud deployment teams. 
You&#39;ll make sure commitments are realistic, dependencies are understood, and nobody is surprised\\n\\nBuild and grow the team\\n- Develop and retain a strong existing team, and hire against the bar described above: people who can go to the OS and framework level when the problem demands it, and who care about production reliability\\n- Coach engineers through a roadmap where priorities shift with model launches, new hardware, and scaling demands. We pair a lot here; you&#39;ll help make that collaboration pattern productive\\n- Pick up slack when it matters. This is a small team in a critical path; sometimes the EM is the one unblocking a stuck deploy or synthesizing a design debate\\n\\n## You may be a good fit if you:\\n- Have 5+ years of engineering management experience, ideally with at least part of that leading teams on critical-path production infrastructure at scale\\n- Have a deep systems background: load balancing, scheduling, cache-coherent distributed state, high-performance networking, or similar. You need enough depth to make architectural calls about routing and efficiency, and to evaluate candidates who go to the kernel and framework level\\n- Have shipped performance improvements in large-scale systems and can explain, with numbers, what the impact was\\n- Have run production infrastructure with real operational stakes: on-call, incident response, capacity events, deploy discipline\\n- Are results-oriented with a bias toward impact, and comfortable working in a space where throughput, latency, stability, and feature velocity all pull in different directions\\n- Build strong relationships across team boundaries; this is a seam role, and much of the job is making sure other teams can rely on yours\\n- Are curious about machine learning systems. 
You don&#39;t need an ML research background, but you should want to learn how transformer inference actually works and how that shapes the systems problems\\n\\nStrong candidates may also have:\\n- Experience with LLM inference serving: KV caching, continuous batching, request scheduling, prefill/decode disaggregation\\n- Background in cluster schedulers, load balancers, service meshes, or coordination planes at scale\\n- Familiarity with heterogeneous accelerator fleets (GPU/TPU/Trainium) and how hardware differences affect workload placement\\n- Experience with GPU/accelerator programming, ML framework internals, or OS-level performance debugging, enough to follow and evaluate the technical work, not necessarily to do it daily\\n- Led teams at supercomputing or hyperscaler infrastructure scale\\n- Led teams through rapid-growth periods where hiring and onboarding competed with roadmap delivery\\n\\nThe annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\\nAnnual Salary: $405,000-$485,000 USD</strong></p>","url":"https://yubhub.co/jobs/job_63af8568-789","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5155391008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["engineering management","deep systems background","load balancing","scheduling","cache-coherent distributed state","high-performance networking"],"x-skills-preferred":["LLM 
inference serving","cluster schedulers","load balancers","service meshes","coordination planes","heterogeneous accelerator fleets","GPU/TPU/Trainium","GPU/accelerator programming","ML framework internals","OS-level performance debugging"],"datePosted":"2026-04-18T15:37:38.038Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, deep systems background, load balancing, scheduling, cache-coherent distributed state, high-performance networking, LLM inference serving, cluster schedulers, load balancers, service meshes, coordination planes, heterogeneous accelerator fleets, GPU/TPU/Trainium, GPU/accelerator programming, ML framework internals, OS-level performance debugging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f95fe525-8fd"},"title":"Staff Software Engineer, Inference","description":"<p><strong>About the role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. 
We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p><strong>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</strong></p>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>\n<li>Python or Rust</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Representative projects across the org</strong></p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, 
research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p><strong>Deadline to apply: None. Applications will be reviewed on a rolling basis.</strong></p>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. 
In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view</p>","url":"https://yubhub.co/jobs/job_f95fe525-8fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5097742008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000 - £390,000 GBP","x-skills-required":["performance optimization","distributed systems","large-scale service orchestration","intelligent request routing","LLM inference optimization","batching strategies","multi-accelerator deployments","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":["high-performance, large-scale distributed systems","implementing and deploying machine learning systems at scale","load balancing, request routing, or traffic management systems","caching strategies"],"datePosted":"2026-03-08T13:49:42.673Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, 
UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust, high-performance, large-scale distributed systems, implementing and deploying machine learning systems at scale, load balancing, request routing, or traffic management systems, caching strategies","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ca53b3f7-f72"},"title":"Staff / Senior Software Engineer, Inference","description":"<p><strong>About the role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>\n<p>The team has a dual mandate: <strong>maximizing compute efficiency</strong> to serve our explosive customer growth, while <strong>enabling breakthrough research</strong> by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. 
We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant software engineering experience, particularly with distributed systems</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Enjoy pair programming (we love to pair!)</li>\n<li>Want to learn more about machine learning systems and infrastructure</li>\n<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>High-performance, large-scale distributed systems</li>\n<li>Implementing and deploying machine learning systems at scale</li>\n<li>Load balancing, request routing, or traffic management systems</li>\n<li>LLM inference optimization, batching, and caching strategies</li>\n<li>Kubernetes and cloud infrastructure (AWS, GCP, Azure)</li>\n<li>Python or Rust</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>\n<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>\n<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>\n<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>\n<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>\n<li>Supporting inference for new model architectures</li>\n<li>Analyzing observability data to tune performance based on real-world production 
workloads</li>\n<li>Managing multi-region deployments and geographic routing for global customers</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research co</p>","url":"https://yubhub.co/jobs/job_ca53b3f7-f72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4951696008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$300,000 - $485,000 USD","x-skills-required":["distributed systems","machine learning systems","load balancing","request routing","traffic management","LLM inference optimization","Kubernetes","cloud infrastructure","Python","Rust"],"x-skills-preferred":["high-performance distributed systems","implementing and deploying machine learning systems at scale","structured sampling","prompt 
caching"],"datePosted":"2026-03-08T13:49:03.736Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, machine learning systems, load balancing, request routing, traffic management, LLM inference optimization, Kubernetes, cloud infrastructure, Python, Rust, high-performance distributed systems, implementing and deploying machine learning systems at scale, structured sampling, prompt caching","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6c3855c-59c"},"title":"Candidate Experience Manager","description":"<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>People</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>San Francisco: $216K – $240K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to build safe artificial general intelligence (AGI) 
which benefits all of humanity. This long-term undertaking brings the world’s best scientists, engineers, and business professionals into one lab together to accomplish this.</p>\n<p>In pursuit of this mission, our Recruiting team is responsible for finding, assessing, and hiring exceptional talent across various fields and specialties. The team acts as consultative partners to our leaders across the organization, from technical managers up to our CEO, Sam Altman. We’re an established, collaborative team with a balance of structure, process, and ambitious problems to solve for.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a dynamic and experienced Candidate Experience Manager to lead our global coordination team. This individual will be responsible for overseeing the coordination of candidate interviews, ensuring a seamless and exceptional candidate experience, and scaling our coordination processes to meet the demands of a high-volume recruiting environment.</p>\n<p>This role is based in our San Francisco HQ office and not currently open to fully remote work.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Manage and lead our ever-growing coordination team, providing guidance, support, and development opportunities.</li>\n</ul>\n<ul>\n<li>Oversee the coordination processes of all candidate interviews, including scheduling, communication, and logistics for both virtual and in-office interviews.</li>\n</ul>\n<ul>\n<li>Develop and implement scalable coordination processes to improve efficiency and consistency across the team.</li>\n</ul>\n<ul>\n<li>Influence and enhance candidate experience programs, ensuring every candidate receives a positive and professional experience throughout the recruitment process.</li>\n</ul>\n<ul>\n<li>Oversee high-volume recruiting activities, ensuring timely and accurate scheduling and coordination of interviews and candidate interactions.</li>\n</ul>\n<ul>\n<li>Develop, implement, and audit job posting 
and interview plan processes to ensure consistency and effectiveness.</li>\n</ul>\n<ul>\n<li>Collaborate with recruiters, hiring managers, and other stakeholders to ensure alignment and support throughout the recruitment lifecycle.</li>\n</ul>\n<ul>\n<li>Continuously assess and improve coordination practices, leveraging feedback and data to drive process improvements.</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>A good understanding of load balancing for coordinators</li>\n</ul>\n<ul>\n<li>Familiarity with Ashby ATS is a plus</li>\n</ul>\n<ul>\n<li>Proven experience in managing a global recruiting coordination team.</li>\n</ul>\n<ul>\n<li>Strong familiarity with in-office interviewing processes and best practices.</li>\n</ul>\n<ul>\n<li>Demonstrated background in building and scaling coordination processes within a high-volume recruiting environment.</li>\n</ul>\n<ul>\n<li>History of managing and influencing candidate experience programs, with a focus on delivering exceptional service.</li>\n</ul>\n<ul>\n<li>Excellent organizational and multitasking skills, with a strong attention to detail.</li>\n</ul>\n<ul>\n<li>Outstanding communication and interpersonal skills, with the ability to collaborate effectively with diverse teams.</li>\n</ul>\n<ul>\n<li>Well versed in Project Management Practices</li>\n</ul>\n<ul>\n<li>At least 5 years of people management experience.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_d6c3855c-59c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c8172b3d-7598-48b9-b730-ecf23aac308a","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216K – $240K","x-skills-required":["load balancing","Ashby ATS","project management","communication","interpersonal skills","organizational skills","multitasking","attention to detail"],"x-skills-preferred":["candidate experience","recruiting coordination","global coordination","high-volume recruiting","job posting","interview plan","recruiters","hiring managers","stakeholders"],"datePosted":"2026-03-06T18:39:42.566Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"load balancing, Ashby ATS, project management, communication, interpersonal skills, organizational skills, multitasking, attention to detail, candidate experience, recruiting coordination, global coordination, high-volume recruiting, job posting, interview plan, recruiters, hiring managers, 
stakeholders","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2410222e-b9e"},"title":"Executive Business Operations Program Manager","description":"<p><strong>Executive Business Operations Program Manager</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Programs</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$279K – $400K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Executive Business Operations Program Manager will design, unify, and scale the Executive Business Partner (EBP) function at OpenAI to drive operational excellence while preserving the flexibility required for executive-specific support models. Acting as a strategic partner to leadership, this role bridges vision and execution through strong systems thinking, operational rigor, and emotional intelligence.</p>\n<p><strong>About the Role</strong></p>\n<p>This role enables the EBP community through a matrix operating model, where EBPs remain embedded with their executive stakeholders and functional leadership while operating against shared, organization-wide expectations. The Executive Business Operations Program Manager drives cross-functional alignment by establishing common standards, operating rhythms, and pragmatic governance that create clarity, consistency, and measurable impact across teams. The model preserves executive autonomy and functional ownership while improving coordination, prioritization, and execution at scale.</p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and implement scalable systems, processes, and tools that improve clarity, efficiency, and consistency across the EBP function.</li>\n</ul>\n<ul>\n<li>Develop playbooks, documentation, and training materials to codify best practices and judgment standards.</li>\n</ul>\n<ul>\n<li>Establish operating cadences, governance forums, and escalation pathways to manage priorities and tradeoffs effectively.</li>\n</ul>\n<ul>\n<li>Serve as a trusted partner to leadership, translating strategic priorities into actionable and executable plans.</li>\n</ul>\n<ul>\n<li>Partner with the Organizational Lead on executive operations budgeting, headcount planning, and resource allocation.</li>\n</ul>\n<ul>\n<li>Lead capacity planning and workload balancing across EBPs to ensure sustainable coverage and responsiveness.</li>\n</ul>\n<ul>\n<li>Surface structural gaps, friction points, and opportunities for improvement within the executive support model.</li>\n</ul>\n<ul>\n<li>Provide data-driven insights that inform decision-making and assess the effectiveness of executive operations.</li>\n</ul>\n<ul>\n<li>Define success metrics and key performance indicators for executive support effectiveness.</li>\n</ul>\n<ul>\n<li>Drive continuous improvement through retrospectives, stakeholder feedback, and performance insights.</li>\n</ul>\n<ul>\n<li>Ensure support models scale appropriately as the organization grows and evolves</li>\n</ul>\n<ul>\n<li>Coach and develop EBPs through a structured leadership model that strengthens execution quality, judgment, and stakeholder effectiveness.</li>\n</ul>\n<ul>\n<li>Provide formal performance input to managers and executive stakeholders to support fair, consistent evaluation and growth.</li>\n</ul>\n<ul>\n<li>Lead and develop training for EBP 
team.</li>\n</ul>\n<ul>\n<li>Develop a plan for succession planning to strengthen the long-term health of the EBP function.</li>\n</ul>\n<ul>\n<li>Establish clear expectations for professional growth, career pathways, and standards of excellence.</li>\n</ul>\n<ul>\n<li>Foster a culture of trust, transparency, and high performance across executive operations.</li>\n</ul>\n<ul>\n<li>Model discretion, sound judgment, and proactive communication in high-stakes and sensitive situations.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>Provided leadership for Executive Business Partners (EBPs) across the organization.</li>\n</ul>\n<ul>\n<li>Established clear role expectations, operating norms, and standards of excellence across executive support.</li>\n</ul>\n<ul>\n<li>Maintained a federated model in which EBPs remain embedded with their executives, while benefiting from shared frameworks and oversight.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety</p>","url":"https://yubhub.co/jobs/job_2410222e-b9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7460f4fb-f2e2-473f-9932-0c4d6d427b62","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$279K – $400K • Offers Equity","x-skills-required":["Executive Business Partner","Leadership","Systems thinking","Operational rigor","Emotional intelligence","Strategic planning","Budgeting","Headcount planning","Resource allocation","Capacity planning","Workload balancing","Data-driven insights","Performance metrics","Continuous improvement","Training and development","Succession planning","Professional growth","Career pathways","Standards of excellence","Discretion","Sound judgment","Proactive communication"],"x-skills-preferred":["Project management","Process improvement","Change management","Communication","Collaboration","Problem-solving","Analytical skills","Business acumen","Technical skills"],"datePosted":"2026-03-06T18:36:14.661Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"Executive Business Partner, Leadership, Systems thinking, Operational rigor, Emotional intelligence, Strategic planning, Budgeting, Headcount planning, Resource allocation, Capacity planning, Workload balancing, Data-driven insights, Performance metrics, Continuous improvement, Training and development, Succession planning, Professional growth, Career pathways, Standards of excellence, Discretion, Sound judgment, Proactive 
communication, Project management, Process improvement, Change management, Communication, Collaboration, Problem-solving, Analytical skills, Business acumen, Technical skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":279000,"maxValue":400000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f8b5ed43-df9"},"title":"Senior Architect","description":"<p>We Are EA</p>\n<p>We’re EA—the world’s largest video game publisher. You’re probably familiar with many of our titles—Madden, FC, Apex Legends, The Sims, Need for Speed, Dead Space, Battlefield and Star Wars, to name a few. But maybe you don’t know how we’re committed to creating games for every platform—from social to mobile to console —to give our players that anytime, anywhere access they demand. What does that mean for you? It means more opportunities to unleash your computing genius.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Architect, design &amp; build massively scalable services, solutions, and platforms with stellar performance.</li>\n<li>Provide technology roadmap &amp; direction for platforms, products, and features.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Bachelor of Technology or Engineering from a reputed institute.</li>\n<li>12+ years of experience working in Java, J2EE technologies and 8+ years of experience in building highly scalable distributed systems using Microservices architecture.</li>\n</ul>","url":"https://yubhub.co/jobs/job_f8b5ed43-df9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic 
Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Senior-Architect/211382","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","J2EE technologies","Microservices architecture"],"x-skills-preferred":["Spring","Spring Boot","gRPC","Load Balancing","Caching","Message Buses","AWS Cloud","Kubernetes","Docker"],"datePosted":"2026-01-19T04:03:42.932Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, J2EE technologies, Microservices architecture, Spring, Spring Boot, gRPC, Load Balancing, Caching, Message Buses, AWS Cloud, Kubernetes, Docker"}]}