{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/workloads"},"x-facet":{"type":"skill","slug":"workloads","display":"Workloads","count":46},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_792cef6b-cf8"},"title":"Transaction Principal","description":"<p>As a Transaction Principal for Australia at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our Australian data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>\n<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems in the region; you&#39;ll bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams, and partner closely with our Compute Markets team, which owns the Australia market strategy and government relationships.
This is not an established leasing org; you&#39;ll be building process alongside execution.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the RFP and commercial sourcing process for Australian data center deals, managing developer outreach, proposal evaluation, and competitive selection</li>\n<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>\n<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>\n<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance to ensure alignment prior to lease execution</li>\n<li>Act as the single point of contact for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing status</li>\n<li>Develop and maintain transaction timelines, tracking critical-path items and proactively identifying risks that could impact deal closure</li>\n<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>\n<li>Manage complex digital infrastructure development activities to a construction-ready state, through a developer or directly</li>\n<li>Marry the right projects, capital stacks, and developers at the right stages</li>\n<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint in the region</li>\n<li>Partner with the Compute Markets Manager to prioritize sites and counterparties, and feed deal learnings back into Australia market strategy</li>\n</ul>\n<p>You may be a good fit if
you:</p>\n<ul>\n<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>\n<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>\n<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>\n<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>\n<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>\n<li>Have experience working in or with Australian markets, with knowledge of the local real estate and development landscape</li>\n<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>\n<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>\n<li>Demonstrate exceptional communication skills and can coordinate effectively across time zones with HQ-based teams and external partners</li>\n</ul>\n<p>It&#39;s a bonus if you:</p>\n<ul>\n<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>\n<li>Come from the development side of the industry rather than traditional brokerage/leasing; you understand how DC development works and how value is created (yield-on-cost, cap rates, development fees)</li>\n<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>\n<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>\n<li>Understand utility coordination, power procurement, or energy considerations in data center transactions, particularly in the Australian context (NEM, grid connection)</li>\n<li>Have relationships within the Australian data center developer and
broker ecosystem</li>\n<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>\n<li>Have experience in high-growth technology companies managing infrastructure expansion</li>\n</ul>\n<p>Logistics</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_792cef6b-cf8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5154345008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["transaction management","commercial real estate","data center leasing","infrastructure procurement","negotiation","project management","RFP processes","competitive sourcing","Australian markets","local real estate and development landscape","communication skills"],"x-skills-preferred":["data center or hyperscale 
infrastructure transactions","DC development","yield-on-cost","cap rates","development fees","technical requirements for AI/ML workloads","power density","cooling","network connectivity","utility coordination","power procurement","energy considerations","Australian data center developer and broker ecosystem","corporate development","strategic partnerships","infrastructure investment","high-growth technology companies"],"datePosted":"2026-04-18T15:58:10.532Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sydney, Australia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"transaction management, commercial real estate, data center leasing, infrastructure procurement, negotiation, project management, RFP processes, competitive sourcing, Australian markets, local real estate and development landscape, communication skills, data center or hyperscale infrastructure transactions, DC development, yield-on-cost, cap rates, development fees, technical requirements for AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy considerations, Australian data center developer and broker ecosystem, corporate development, strategic partnerships, infrastructure investment, high-growth technology companies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0f05d190-fce"},"title":"Sr. Manager, Field Engineering - Digital Native Business","description":"<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the fastest-growing tech companies.</p>\n<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. 
You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>\n<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>\n<p>A key responsibility of this role is to hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>\n<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers.</li>\n<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proofs of concept, and establish themselves as trusted advisors throughout the customer lifecycle.</li>\n<li>Travel to customer sites for executive sessions, technical workshops, and relationship building.</li>\n<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team.</li>\n<li>Stay current with emerging Data and AI trends in the digital native tech sector.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>7+ years of experience in the data space with a technical product (e.g. data warehousing, big data, cloud infrastructure, or machine learning).</li>\n<li>5+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment.</li>\n<li>A history of building a territory, growing strategic accounts, and exceeding targets.</li>\n<li>Inspiring a team vision about the unique nature of the digital natives business.</li>\n<li>A history of execution by managing workloads and consumption with sales, product, and engineering counterparts.</li>\n<li>Experience owning executive alignment in accounts that guide strategic decisions.</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Local Pay Range $192,100-$264,175 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0f05d190-fce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8496009002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,100-$264,175 USD","x-skills-required":["data warehousing","big data","cloud infrastructure","machine learning","technical product","digital native customers","data, analytical, and AI workloads","Solutions Architects","customer-facing teams","hiring, onboarding, and supporting team members","high-growth environment","executive alignment","accounts that guide strategic decisions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:49.724Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Colorado; Remote - California; Remote - Oregon; Remote - Washington"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, big data, cloud infrastructure, machine learning, technical product, digital native customers, data, analytical, and AI workloads, Solutions Architects, customer-facing teams, hiring, onboarding, and supporting team members, high-growth environment, executive alignment, accounts that guide strategic decisions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192100,"maxValue":264175,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9af8d812-df8"},"title":"AI Infrastructure Engineer","description":"<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI 
products.</p>\n<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>\n<p>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</p>\n<p>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</p>\n<p>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</p>\n<p>Collaborate closely with ML scientists to implement cutting edge training and inference methods and bring them to production.</p>\n<p>Play an active role in hiring, mentoring, and developing other engineers on the team.</p>\n<p>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</p>\n<p>We’re looking to hire Senior+ AI Infrastructure Engineers. 
You’re likely a great fit if:</p>\n<p>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</p>\n<p>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</p>\n<p>You have hands-on experience with one or more of the following:</p>\n<p>Model training (especially transformers and LLMs).</p>\n<p>Model inference at scale (again, especially transformers and LLMs).</p>\n<p>Low-level GPU work, such as writing CUDA or Triton kernels.</p>\n<p>You’re comfortable working in production environments at meaningful scale (traffic, data, or organizational).</p>\n<p>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</p>\n<p>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</p>\n<p>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, Go, etc.). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</p>\n<p>We are a well-treated bunch, with awesome benefits!
If there’s something important to you that’s not on this list, talk to us!</p>\n<p>Competitive salary, annual bonus and equity</p>\n<p>Regular compensation reviews - we reward great work!</p>\n<p>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated.</p>\n<p>Generous paid time off above statutory minimum</p>\n<p>Hybrid working</p>\n<p>MacBooks are our standard, but we also offer Windows for certain roles when needed.</p>\n<p>Fun events for employees, friends, and family!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9af8d812-df8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7824142","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["model training","model inference","low-level GPU work","CUDA","Triton","Python","Ruby","Java","Go"],"x-skills-preferred":["experience at AI native companies","running training or inference workloads on Kubernetes","AWS","cloud providers","production experience with Python in ML or infrastructure contexts"],"datePosted":"2026-04-18T15:57:33.379Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure 
contexts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_588dfb0e-611"},"title":"Solutions Architect - Kubernetes","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>\n<p>Your responsibilities will include serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>\n<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>\n<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>\n<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>You will offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>\n<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>\n<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>\n<p>You will lead the prototyping and initiation of research and 
development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>To be successful in this role, you will need to have a B.S. in Computer Science or a related technical discipline, or equivalent experience.</p>\n<p>You will also need to have 7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions.</p>\n<p>You will need to be fluent in cloud computing concepts, architecture, and technologies, with hands-on experience in designing and implementing cloud solutions.</p>\n<p>You will need to have a proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</p>\n<p>You will need to be familiar with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as Infiniband and NVIDIA Collective Communications Library (NCCL).</p>\n<p>You will need to have experience with running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes.</p>\n<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_588dfb0e-611","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4557835006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $220,000","x-skills-required":["Kubernetes","Cloud Computing","High-Performance Compute (HPC)","Distributed Systems","Cloud Infrastructure","Scalable Solutions","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)","Slurm","Kubernetes Clusters"],"x-skills-preferred":["Code Contributions to Open-Source Inference Frameworks","Scripting and Automation Related to Kubernetes Clusters and Workloads","Building Solutions Across Multi-Cloud Environments","Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures"],"datePosted":"2026-04-18T15:57:29.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server 
Architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ce19e8c0-163"},"title":"Transaction Manager","description":"<p>As a Transaction Manager at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>\n<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems, requiring you to bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help identify data center capacity opportunities and options through management of network relationships across data center developer, broker, and power contacts.</li>\n<li>Lead the RFP and commercial sourcing process for specific data center deals, managing developer outreach, proposal evaluation, and competitive selection processes</li>\n<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>\n<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>\n<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance organization to ensure alignment prior to lease execution</li>\n<li>Act as the single point of contact (SPOC) for auxiliary organizations 
including networks, deployments, and government relations, providing regular updates on transaction progress and leasing process status</li>\n<li>Develop and maintain transaction timelines, tracking critical path items and proactively identifying risks that could impact deal closure</li>\n<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint</li>\n<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>\n<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>\n<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>\n<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>\n<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>\n<li>Demonstrate exceptional communication skills, able to serve as an effective liaison between internal stakeholders and external partners</li>\n<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>\n<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>\n<li>Have a collaborative mindset and can build trust with diverse stakeholder groups across the organization</li>\n</ul>\n<p>It&#39;s a bonus if you:</p>\n<ul>\n<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>\n<li>Understand technical requirements for AI/ML workloads 
including power density, cooling, and network connectivity</li>\n<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>\n<li>Possess familiarity with data center developer ecosystems and market dynamics</li>\n<li>Have experience in high-growth technology companies managing infrastructure expansion</li>\n<li>Understand utility coordination, power procurement, or energy considerations in data center transactions</li>\n<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>\n</ul>\n<p>The annual compensation range for this role is $365,000-$435,000 USD.</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ce19e8c0-163","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5099080008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$435,000 USD","x-skills-required":["transaction management","commercial real estate","data center leasing","infrastructure procurement","RFP processes","competitive sourcing","project management","negotiation skills","term sheets","LOIs","commercial agreements"],"x-skills-preferred":["data center or hyperscale infrastructure transactions","AI/ML workloads","power density","cooling","network connectivity","utility coordination","power procurement","energy considerations"],"datePosted":"2026-04-18T15:56:38.725Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"transaction management, commercial real estate, data center leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, term sheets, LOIs, commercial agreements, data center or hyperscale infrastructure transactions, AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy 
considerations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":435000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_24176cb8-311"},"title":"Member of Technical Staff - Compute Infrastructure","description":"<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>\n<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>\n<p>Responsibilities include building and managing massive-scale clusters, designing, developing, and extending an in-house container orchestration platform, collaborating with research teams to architect and optimize compute clusters, profiling, debugging, and resolving complex system-level performance bottlenecks, and owning end-to-end infrastructure initiatives.</p>\n<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and a proven track record of profiling, debugging, and optimizing complex system-level performance issues.</p>\n<p>Preferred skills and experience include Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, experience operating or designing large-scale AI 
training/inference clusters, and familiarity with performance tools, tracing, and debugging in production distributed environments.</p>","url":"https://yubhub.co/jobs/job_24176cb8-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052040007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent)","Strong proficiency in systems programming languages such as C/C++ and Rust","Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering","Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale"],"x-skills-preferred":["Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads","Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale)","Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute","Familiarity with performance tools, tracing, and debugging in production distributed environments"],"datePosted":"2026-04-18T15:55:50.213Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e68e5c3b-1e2"},"title":"Lakebase Account Executive","description":"<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>\n<p>You will lead with outcomes for key Lakebase personas, 
including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>\n<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>\n<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>\n<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>\n<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>\n<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>\n<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>\n<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>\n<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>\n<p>Candidates should demonstrate how they can act 
as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>\n<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>\n<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>\n<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>\n<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>\n<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>\n<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>\n<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>\n<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>\n<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>\n<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust 
with senior decision makers.</p>\n<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>\n<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>\n<p>Bachelor’s degree or equivalent practical experience.</p>\n<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>\n<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>\n<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>\n<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>\n<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>\n<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>","url":"https://yubhub.co/jobs/job_e68e5c3b-1e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8449848002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","operational databases","OLTP workloads","transactional cloud database services","data platforms","lakehouse architectures","cloud 
ecosystems","reverse ETL","real-time decisioning","operational analytics","AI-native applications","agent-driven applications","low-latency","highly scalable operational data services"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:06.106Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_96d05ee1-799"},"title":"Staff Software Engineer, Cluster Orchestration","description":"<p><strong>Job Description</strong></p>\n<p>CoreWeave is The Essential Cloud for AI. 
Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence.</p>\n<p>Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.</p>\n<p>Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.</p>\n<p><strong>About the Role</strong></p>\n<p>As part of the Cluster Orchestration team, you will play a key role in advancing CoreWeave&#39;s orchestration platform including SUNK (Slurm on Kubernetes) and beyond, our Kubernetes-native foundation that powers AI training and inference at scale.</p>\n<p>This is an opportunity to help shape one of the most critical layers of the AI cloud: ensuring workloads run seamlessly, reliably, and efficiently across massive GPU clusters.</p>\n<p>By building the systems that eliminate infrastructure bottlenecks and create new orchestration capabilities, you will directly empower customers to innovate faster and push the boundaries of what&#39;s possible with AI.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>As a Staff Engineer, you will be a technical leader shaping the long-term strategy for CoreWeave&#39;s orchestration platform.</p>\n<p>You&#39;ll define architectural direction, own critical parts of the orchestration platform and other managed services, and drive cross-org initiatives in scheduling, quota enforcement, and scaling at hyperscale.</p>\n<p>You&#39;ll mentor senior engineers, establish org-wide best practices in reliability and observability, and ensure CoreWeave&#39;s orchestration layer evolves to meet the demands of next-generation AI workloads.</p>\n<p><strong>Who You Are</strong></p>\n<ul>\n<li>8+ years of software engineering experience.</li>\n</ul>\n<ul>\n<li>Proven track record designing and operating large-scale distributed systems 
in production.</li>\n</ul>\n<ul>\n<li>Deep expertise in Slurm/Kubernetes internals and cloud-native development.</li>\n</ul>\n<ul>\n<li>Advanced proficiency in Go and distributed systems design.</li>\n</ul>\n<ul>\n<li>Experience setting technical direction and influencing cross-team architecture.</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in CS, EE, or related field.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Familiarity with orchestration and workflow technologies such as Ray, Kubeflow, Kueue, Istio, Knative, or Argo Workflows</li>\n</ul>\n<ul>\n<li>Experience with distributed workloads, GPU-based applications, or ML pipelines.</li>\n</ul>\n<ul>\n<li>Knowledge of scheduling concepts like quota enforcement, pre-emption, and scaling strategies.</li>\n</ul>\n<ul>\n<li>Exposure to reliability practices including SLOs, alarms, and post-incident reviews.</li>\n</ul>\n<ul>\n<li>Experience with AI infrastructure and workloads (ML training, inference, or HPC).</li>\n</ul>\n<ul>\n<li>Ability to mentor senior engineers and elevate organizational standards.</li>\n</ul>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast! 
We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p><strong>Salary and Benefits</strong></p>\n<p>The base salary range for this role is $185,000 to $275,000.</p>\n<p>The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>\n<p>We strive for both market alignment and internal equity when determining compensation.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p><strong>What We Offer</strong></p>\n<p>The range we&#39;ve posted represents the typical compensation range for this role.</p>\n<p>To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors.</p>\n<p>These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, 
dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>","url":"https://yubhub.co/jobs/job_96d05ee1-799","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4658801006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$185,000 to $275,000","x-skills-required":["software engineering","distributed systems","Slurm","Kubernetes","cloud-native development","Go","scheduling","quota enforcement","scaling strategies","reliability practices","SLOs","alarms","post-incident reviews","AI infrastructure","workloads","ML training","inference","HPC"],"x-skills-preferred":["orchestration and workflow technologies","Ray","Kubeflow","Kueue","Istio","Knative","Argo 
Workflows"],"datePosted":"2026-04-18T15:53:28.322Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, distributed systems, Slurm, Kubernetes, cloud-native development, Go, scheduling, quota enforcement, scaling strategies, reliability practices, SLOs, alarms, post-incident reviews, AI infrastructure, workloads, ML training, inference, HPC, orchestration and workflow technologies, Ray, Kubeflow, Kueue, Istio, Knative, Argo Workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":185000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6421dea-6e3"},"title":"Strategic Hunter Account Executive - Lakebase","description":"<p>We are seeking a Strategic Hunter Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>This high-impact role sits within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>\n<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>\n<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>\n<p><strong>The impact you will have</strong></p>\n<ul>\n<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a 
defined territory, in partnership with regional Account Executives and the broader account team.</li>\n</ul>\n<ul>\n<li>Lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>\n</ul>\n<ul>\n<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>\n</ul>\n<ul>\n<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>\n</ul>\n<ul>\n<li>Orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>\n</ul>\n<ul>\n<li>Compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>\n</ul>\n<ul>\n<li>Align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>\n</ul>\n<ul>\n<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>\n</ul>\n<p><strong>What success looks like in this role</strong></p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<ul>\n<li>Establish top strategic 
focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>\n</ul>\n<ul>\n<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>\n</ul>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<ul>\n<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>\n</ul>\n<ul>\n<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>\n</ul>\n<ul>\n<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</li>\n</ul>\n<ul>\n<li>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</li>\n</ul>\n<p><strong>What we look for</strong></p>\n<ul>\n<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>\n</ul>\n<ul>\n<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>\n</ul>\n<ul>\n<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>\n</ul>\n<ul>\n<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product 
leaders, operations, line-of-business owners).</li>\n</ul>\n<ul>\n<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>\n</ul>\n<ul>\n<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>\n</ul>\n<ul>\n<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>\n</ul>\n<ul>\n<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree or equivalent practical experience.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</li>\n</ul>\n<ul>\n<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>\n</ul>\n<ul>\n<li>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>\n</ul>\n<ul>\n<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>\n</ul>\n<ul>\n<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>\n</ul>\n<ul>\n<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>\n<p><strong>Our Commitment to Diversity and Inclusion</strong></p>","url":"https://yubhub.co/jobs/job_d6421dea-6e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477547002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","operational databases","Postgres","MySQL","cloud-native DBaaS","data/AI infrastructure","technical buyers","business leaders","modern data and application architectures","cloud-native services","microservices","event-driven systems","AI and analytics strategies","technical stakeholders","business stakeholders","value selling skills","discovering pain","building a business case","quantified outcomes","communication","storytelling","negotiation skills"],"x-skills-preferred":["OLTP workloads","transactional cloud database services","lakehouse architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics use cases","AI-native applications","agent-driven applications","high-growth environments","category-creating environments","partner collaborations","ISV collaborations"],"datePosted":"2026-04-18T15:52:47.849Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India; Mumbai, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, technical buyers, business leaders, modern data and application 
architectures, cloud-native services, microservices, event-driven systems, AI and analytics strategies, technical stakeholders, business stakeholders, value selling skills, discovering pain, building a business case, quantified outcomes, communication, storytelling, negotiation skills, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics use cases, AI-native applications, agent-driven applications, high-growth environments, category-creating environments, partner collaborations, ISV collaborations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a7d182d-c49"},"title":"Solutions Architect - Kubernetes","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>\n<p>Your primary responsibility will be to serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</p>\n<p>You will collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</p>\n<p>You will lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>You will drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</p>\n<p>You will act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>You will offer 
valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</p>\n<p>You will conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</p>\n<p>You will stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</p>\n<p>You will lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>You will represent CoreWeave at conferences and industry events, with occasional travel as required.</p>\n<p>To be successful in this role, you will need to have a proven track record of working as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with an expertise focused on scalable Kubernetes solutions.</p>\n<p>You will also need to have fluency in cloud computing concepts, architecture, and technologies with hands-on experience in designing and implementing cloud solutions.</p>\n<p>In addition, you will need to have a proven track record with building customer relationships, communicating clearly and the ability to break down complex technical concepts to both technical and non-technical audiences.</p>\n<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping 
automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a7d182d-c49","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649036006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $225,000 SGD","x-skills-required":["Cloud computing concepts","Kubernetes solutions","High-performance compute (HPC) environments","Distributed systems","Cloud infrastructure"],"x-skills-preferred":["Code contributions to open-source inference frameworks","Scripting and automation related to Kubernetes clusters and workloads","Building solutions across multi-cloud environments","Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures"],"datePosted":"2026-04-18T15:52:11.835Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing concepts, Kubernetes solutions, High-performance compute (HPC) environments, Distributed systems, Cloud infrastructure, Code contributions to open-source inference frameworks, Scripting and automation related to Kubernetes clusters and workloads, Building solutions across multi-cloud environments, Client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2ab9c635-07a"},"title":"Operations Engineer, Fleet Reliability","description":"<p>The Fleet 
Reliability Operations team is responsible for the day-to-day provisioning, management, and uptime of CoreWeave&#39;s ever-expanding fleet of server nodes. This team plays a central role in CoreWeave&#39;s growth strategy, configuring, updating, and remotely troubleshooting our highest-tier supercomputing clusters and their networking, delivery platforms, and tools dependencies.</p>\n<p>We are seeking curious, creative, and persistent problem solvers to join our Fleet Reliability Operations team to help drive batches of server nodes through our provisioning and validation processes while efficiently and effectively troubleshooting node or cluster problems as they arise.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Configuring and maintaining large-scale high-performance supercomputing clusters running state-of-the-art GPUs</li>\n<li>Troubleshooting hardware and software issues; escalating and coordinating as needed with data center, network, hardware, and platform teams to drive resolution</li>\n<li>Monitoring and analyzing system performance and taking appropriate remediation actions for cloud health</li>\n<li>Approaching work with flexibility and optimism, anticipating shifting business and technical priorities</li>\n<li>Creating and maintaining documentation of team processes, knowledge, and best practices for system management</li>\n<li>Thinking critically about day-to-day work and working collaboratively to improve team processes and efficiency</li>\n</ul>\n<p>As a member of our team, you will be part of a dynamic and fast-paced environment where you will have the opportunity to grow and develop your skills. 
We offer a competitive salary range of $83,000 to $110,000, as well as a comprehensive benefits package, including medical, dental, and vision insurance, company-paid life insurance, and flexible PTO.</p>\n<p>If you are a motivated and detail-oriented individual who is passionate about working with cutting-edge technology, we encourage you to apply for this exciting opportunity.</p>","url":"https://yubhub.co/jobs/job_2ab9c635-07a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4617382006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$83,000 to $110,000","x-skills-required":["Linux system administration","Troubleshooting hardware and software issues","System maintenance tasks","Scripting languages (bash, python, powershell, etc)","Grafana, Prometheus, promsql queries or similar observability platforms"],"x-skills-preferred":["Kubernetes administration","HPC - administering GPU-related workloads","Data center environments including server racks, HVAC systems, fiber trays"],"datePosted":"2026-04-18T15:51:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Plano, TX / Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux system administration, Troubleshooting hardware and software issues, System maintenance tasks, Scripting languages (bash, python, powershell, etc), Grafana, Prometheus, promsql queries or similar observability platforms, Kubernetes administration, HPC - administering GPU-related workloads, Data center environments including server racks, HVAC 
systems, fiber trays","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":83000,"maxValue":110000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1ba5c28-9ce"},"title":"Senior Software Engineer, Observability","description":"<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>\n<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>\n<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>\n<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>\n<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>\n<p>The base salary range for this role is $139,000 to $220,000.</p>\n<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>We offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid Life Insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible Spending Account, Health Savings Account, tuition reimbursement, ability to participate in Employee Stock Purchase Program (ESPP), mental wellness benefits through Spring Health, family-forming support 
provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>","url":"https://yubhub.co/jobs/job_a1ba5c28-9ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4554201006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $220,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","Helm","YAML-based configurations","automated testing","progressive release strategies","on-call rotations"],"x-skills-preferred":["designing, operating, or scaling logging, metrics, or tracing platforms","data streaming systems for observability pipelines","automating infrastructure provisioning","OpenTelemetry for unified telemetry collection and instrumentation","exposure to modern AI workloads and GPU-based infrastructure"],"datePosted":"2026-04-18T15:51:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for 
unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9166d234-4c5"},"title":"Solutions Architect - HPC/AI/ML","description":"<p>As a Solutions Architect at CoreWeave, you will play a vital and dynamic role in helping customers establish their Kubernetes environment, develop proofs of concept, onboard, and optimise workloads. You will serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on AI/ML workloads within high-performance compute (HPC) environments.</p>\n<p>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements. Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</p>\n<p>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise. Act as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</p>\n<p>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture. 
Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimisation and suggesting suitable solutions.</p>\n<p>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders. Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</p>\n<p>Represent CoreWeave at conferences and industry events, with occasional travel as required.</p>","url":"https://yubhub.co/jobs/job_9166d234-4c5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649044006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $225,000 SGD","x-skills-required":["cloud computing concepts","architecture","technologies","NVIDIA GPUs","Infiniband","NVIDIA Collective Communications Library (NCCL)","Slurm","Kubernetes"],"x-skills-preferred":["code contributions to open-source inference frameworks","scripting and automation related to AI/ML workloads","building solutions across multi-cloud environments","client or customer-facing publications/talks on latency, optimisation, or advanced model-server architectures"],"datePosted":"2026-04-18T15:51:30.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing concepts, architecture, technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective 
Communications Library (NCCL), Slurm, Kubernetes, code contributions to open-source inference frameworks, scripting and automation related to AI/ML workloads, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimisation, or advanced model-server architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09c520cf-f62"},"title":"Systems Engineer, Kernel","description":"<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>\n<p>As a kernel generalist, you will be responsible for debugging kernel-level issues, analysing and fixing crashes, panics, dumps, and upstreaming fixes and features that improve the performance and reliability of our stack.</p>\n<p>This position is ideal for someone who thrives in low-level systems engineering, and understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>\n<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>\n<p>Our Team&#39;s Stack:</p>\n<ul>\n<li>Python, Go, bash/sh, C</li>\n</ul>\n<ul>\n<li>Prometheus, Victoria Metrics, Grafana</li>\n</ul>\n<ul>\n<li>Linux Kernel (custom build), Ubuntu</li>\n</ul>\n<ul>\n<li>Intel/AMD/ARM CPUs, Nvidia GPUs, DPUs, Infiniband and Ethernet NICs</li>\n</ul>\n<ul>\n<li>Docker, kubernetes (k8s), KubeVirt, containerd, kubelet</li>\n</ul>\n<p>Focus Areas:</p>\n<ul>\n<li>Kernel Debugging – Analyse kernel crashes, oopses, 
panics, and dumps to identify root causes and propose fixes.</li>\n</ul>\n<ul>\n<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>\n</ul>\n<ul>\n<li>Stack-Wide Support – Ensure kernel support and stability across:</li>\n</ul>\n<ul>\n<li>Virtualization (KubeVirt, QEMU, vFIO)</li>\n</ul>\n<ul>\n<li>Container runtimes (containerd, nydus, kubelet)</li>\n</ul>\n<ul>\n<li>HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand)</li>\n</ul>\n<ul>\n<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>\n</ul>\n<ul>\n<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>\n</ul>\n<p>About the role:</p>\n<ul>\n<li>Triage and fix kernel crashes and performance regressions.</li>\n</ul>\n<ul>\n<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>\n</ul>\n<ul>\n<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>\n</ul>\n<ul>\n<li>Implement diagnostics and tooling for kernel-level observability.</li>\n</ul>\n<ul>\n<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>\n</ul>\n<ul>\n<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>\n</ul>\n<ul>\n<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>\n</ul>\n<ul>\n<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, kdump.</li>\n</ul>\n<ul>\n<li>Strong C programming skills with the ability to write maintainable and upstream-quality code.</li>\n</ul>\n<ul>\n<li>Experience 
working with kernel modules, drivers, and subsystems.</li>\n</ul>\n<ul>\n<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Contributions to the Linux kernel or related open-source projects.</li>\n</ul>\n<ul>\n<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>\n</ul>\n<ul>\n<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>\n</ul>\n<ul>\n<li>GPU/DPU bring-up and driver experience.</li>\n</ul>\n<ul>\n<li>Experience in HPC or large-scale distributed systems.</li>\n</ul>\n<ul>\n<li>Familiarity with QA/QE best practices</li>\n</ul>\n<ul>\n<li>Experience working in Cloud environments</li>\n</ul>\n<ul>\n<li>Experience as a software engineer writing large-scale applications</li>\n</ul>\n<ul>\n<li>Experience with machine learning is a huge bonus</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>","url":"https://yubhub.co/jobs/job_09c520cf-f62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4599319006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Linux kernel engineering","Systems-level development","C programming","Kernel modules","Drivers","Subsystems","Kernel debugging","Upstream contributions","Stack-wide support","Virtualization","Container runtimes","HPC/AI workloads","Kernel-hardware enablement","Performance & stability"],"x-skills-preferred":["Contributions to the Linux kernel","Networking stack expertise","GPU/DPU bring-up and driver experience","Experience in HPC or large-scale distributed systems","QA/QE best practices","Cloud environments","Software engineer writing large-scale applications","Machine learning"],"datePosted":"2026-04-18T15:51:21.252Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance & stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale 
distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_372999e8-579"},"title":"Senior Software Engineer II, AI Workload Orchestration","description":"<p>As a Senior Software Engineer II on the AI Workload Orchestration team, you will help build and operate CoreWeave&#39;s Kubernetes-native platform for admitting, scheduling, and operating AI workloads at scale.</p>\n<p>This platform integrates multiple orchestration and scheduling frameworks such as Kueue, Volcano, and Ray to support modern AI training and inference workflows. It complements SUNK (Slurm on Kubernetes) by providing a Kubernetes-first, cloud-native orchestration layer with deep platform integration.</p>\n<p>You will own meaningful components of the platform, drive reliability and performance improvements, and help scale the system as customer demand and workload complexity continue to grow.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and operate Kubernetes-native services for AI workload orchestration and scheduling</li>\n<li>Own one or more platform components end-to-end, including design, implementation, testing, and on-call support</li>\n<li>Improve scheduling latency, cluster utilization, and workload reliability through metrics-driven engineering</li>\n<li>Contribute to architectural discussions across services and influence design decisions within the platform</li>\n<li>Work closely with adjacent teams (CKS, infrastructure, managed inference) to ensure clean interfaces and integrations</li>\n<li>Mentor junior engineers and raise the quality bar for code, design, and operations</li>\n</ul>\n<p>About you:</p>\n<ul>\n<li>5–8 years of 
professional software engineering experience in distributed systems, cloud infrastructure, or platform engineering</li>\n<li>Strong experience building production systems in Go (Python or C++ a plus)</li>\n<li>Solid understanding of Kubernetes fundamentals, APIs, controllers, and operating services in production</li>\n<li>Experience working with scheduling, resource management, or quota-based systems</li>\n<li>Proven ability to improve system reliability and performance using data and operational metrics</li>\n<li>Comfortable owning services in production and participating in on-call rotations</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience with Kubernetes-native orchestration frameworks such as Kueue, Volcano, Ray, Kubeflow, or Argo Workflows</li>\n<li>Familiarity with GPU-based workloads, ML training, or inference pipelines</li>\n<li>Knowledge of scheduling concepts such as quota enforcement, pre-emption, and backfilling</li>\n<li>Experience with reliability practices including SLOs, alerting, and incident response</li>\n<li>Exposure to AI infrastructure, HPC, or large-scale distributed compute environments</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. 
In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>","url":"https://yubhub.co/jobs/job_372999e8-579","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4647595006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to 
$242,000","x-skills-required":["Kubernetes","Go","Distributed systems","Cloud infrastructure","Platform engineering","Scheduling","Resource management","Quota-based systems"],"x-skills-preferred":["Kueue","Volcano","Ray","Kubeflow","Argo Workflows","GPU-based workloads","ML training","Inference pipelines","SLOs","Alerting","Incident response","AI infrastructure","HPC","Large-scale distributed compute environments"],"datePosted":"2026-04-18T15:50:19.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Go, Distributed systems, Cloud infrastructure, Platform engineering, Scheduling, Resource management, Quota-based systems, Kueue, Volcano, Ray, Kubeflow, Argo Workflows, GPU-based workloads, ML training, Inference pipelines, SLOs, Alerting, Incident response, AI infrastructure, HPC, Large-scale distributed compute environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c6419c4-a9b"},"title":"Software Engineer, Compute Efficiency","description":"<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable, without compromising reliability or latency.</p>\n<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. 
You will help build the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>\n<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>\n<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>\n<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>\n<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>\n<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>\n<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 6+ years of relevant industry experience, including 1+ years leading large-scale, complex projects or teams as a software engineer or tech lead</li>\n<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement</li>\n<li>Have strong proficiency in at least one programming language (e.g., 
Python, Rust, Go, Java)</li>\n<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP</li>\n<li>Have experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning</li>\n<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues</li>\n<li>Have experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments</li>\n<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity</li>\n<li>Have excellent communication and collaboration skills, as you will work closely with internal and external stakeholders to build consensus and drive projects forward</li>\n</ul>\n<p>Strong candidates may have:</p>\n<ul>\n<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL</li>\n<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>\n<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems</li>\n<li>Published work in performance optimization and scaling distributed systems</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3c6419c4-a9b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108982008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["distributed systems","cloud infrastructure","Kubernetes","Infrastructure as Code","AWS","GCP","Python","Rust","Go","Java"],"x-skills-preferred":["machine learning infrastructure workloads","NCCL","linux kernel tuning","eBPF","performance optimization","scaling distributed systems"],"datePosted":"2026-04-18T15:49:18.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95061695-858"},"title":"Director of Engineering, Media & Entertainment (M&E)","description":"<p>CoreWeave is seeking a Director of Engineering, Media &amp; Entertainment (M&amp;E) to lead the development of next-generation cloud platforms and tools that power modern content creation workflows. 
This role will drive the engineering strategy and execution for solutions that support visual effects (VFX), animation, rendering, and post-production pipelines used by studios, artists, and creative teams worldwide.</p>\n<p>As a senior engineering leader, you will build and lead high-performing engineering teams responsible for designing scalable infrastructure, developer tools, and user-facing systems that enable creative professionals to run complex production workloads in the cloud. You will collaborate closely with product, design, infrastructure, and customer teams to translate real-world production workflows into reliable, high-performance software platforms.</p>\n<p>This role combines deep engineering leadership with domain expertise in M&amp;E workflows, ensuring that the platform delivers exceptional performance, reliability, and usability for demanding creative workloads.</p>\n<p><strong>Leadership &amp; Strategy</strong></p>\n<ul>\n<li>Build and scale high-performing engineering teams focused on cloud platforms for media production workloads including rendering, simulation, and content processing.</li>\n<li>Recruit, mentor, and develop engineering managers and senior engineers while fostering a culture of innovation, accountability, and collaboration.</li>\n<li>Define and execute the long-term engineering strategy for Media &amp; Entertainment products and services.</li>\n<li>Partner with Product and Design leaders to translate industry workflows and customer needs into scalable platform capabilities.</li>\n<li>Establish engineering best practices for reliability, security, observability, and operational excellence.</li>\n<li>Drive roadmap alignment between engineering initiatives and strategic business objectives.</li>\n</ul>\n<p><strong>Technical Leadership</strong></p>\n<ul>\n<li>Lead the design and development of scalable backend services, APIs, and developer interfaces that power M&amp;E cloud workflows.</li>\n<li>Build platforms that support demanding workloads such as rendering, asset processing, and distributed compute pipelines.</li>\n<li>Drive architecture decisions for cloud-native systems leveraging technologies such as Kubernetes, distributed services, and infrastructure-as-code.</li>\n<li>Ensure the platform enables self-service provisioning, automation, and repeatable workflows for production pipelines.</li>\n<li>Establish engineering standards around performance, scalability, and security for enterprise-grade SaaS/PaaS systems.</li>\n<li>Oversee system reliability and operational readiness through clear SLOs, monitoring, and runbook-driven on-call practices.</li>\n</ul>\n<p><strong>Product &amp; Workflow Collaboration</strong></p>\n<ul>\n<li>Work closely with product leadership to define technical requirements aligned with real customer workflows in animation, VFX, and media production.</li>\n<li>Engage directly with studios, artists, and technical directors to understand pipeline challenges and incorporate feedback into product development.</li>\n<li>Translate industry needs into clear engineering priorities and technical roadmaps.</li>\n<li>Guide development teams through product milestones including specification, development, testing, and release.</li>\n<li>Ensure engineering efforts balance customer requirements, technical feasibility, and business goals.</li>\n</ul>\n<p>Customer and industry collaboration is critical in identifying workflow needs and transforming them into actionable development plans for engineering teams.</p>\n<p><strong>Operational Excellence</strong></p>\n<ul>\n<li>Implement engineering processes that support scalable development, including CI/CD pipelines, testing strategies, and code review standards.</li>\n<li>Manage development timelines and resource allocation across multiple engineering teams.</li>\n<li>Track key operational and customer metrics including performance, reliability, and cost efficiency.</li>\n<li>Drive continuous improvement in engineering productivity and system performance.</li>\n<li>Partner with QA, support, and customer success teams to ensure high-quality releases and strong user satisfaction.</li>\n</ul>\n<p><strong>Who You Are:</strong></p>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>10+ years of software engineering experience, including leadership of engineering teams and managers.</li>\n<li>Proven experience building and scaling cloud-based platforms or distributed systems.</li>\n<li>Strong understanding of cloud infrastructure, microservices architecture, and automation technologies.</li>\n<li>Experience delivering enterprise SaaS or PaaS products used by external customers.</li>\n<li>Excellent leadership, communication, and cross-functional collaboration skills.</li>\n<li>Ability to operate strategically while remaining deeply technical and hands-on with architecture decisions.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience building platforms or tools for Media &amp; Entertainment workflows such as VFX, animation, rendering, or post-production pipelines.</li>\n<li>Familiarity with industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan.</li>\n<li>Experience designing APIs, developer platforms, or automation frameworks used by technical users.</li>\n<li>Knowledge of GPU-accelerated compute workloads and distributed rendering systems.</li>\n<li>Experience working with Kubernetes, infrastructure-as-code, and large-scale cloud environments.</li>\n</ul>\n<p><strong>What Success Looks Like</strong></p>\n<ul>\n<li>Engineering teams delivering reliable, scalable platforms used by media studios and creative teams globally.</li>\n<li>Clear alignment between product vision, customer workflows, and engineering execution.</li>\n<li>Platforms capable of supporting large-scale production workloads with high performance and reliability.</li>\n<li>Strong engineering culture focused on innovation, collaboration, and operational excellence.</li>\n</ul>\n<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match.</p>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. 
We strive for both market alignment and internal equity when determining compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_95061695-858","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4666156006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$206,000 - $303,000","x-skills-required":["Cloud infrastructure","Microservices architecture","Automation technologies","Enterprise SaaS or PaaS products","Leadership","Communication","Cross-functional collaboration","Strategic decision-making"],"x-skills-preferred":["Media & Entertainment workflows","VFX, animation, rendering, or post-production pipelines","Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan","APIs, developer platforms, or automation frameworks","GPU-accelerated compute workloads and distributed rendering systems","Kubernetes, infrastructure-as-code, and large-scale cloud environments"],"datePosted":"2026-04-18T15:49:14.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure, Microservices architecture, Automation technologies, Enterprise SaaS or PaaS products, Leadership, Communication, Cross-functional collaboration, Strategic decision-making, Media & Entertainment workflows, VFX, animation, rendering, or post-production pipelines, Industry tools such as Maya, Houdini, Katana, Cinema 4D, V-Ray, Arnold, or RenderMan, APIs, developer platforms, or automation 
frameworks, GPU-accelerated compute workloads and distributed rendering systems, Kubernetes, infrastructure-as-code, and large-scale cloud environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":206000,"maxValue":303000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b71a8e89-5f0"},"title":"Multinational Digital Infrastructure - Senior Cloud Engineer","description":"<p>Anduril Industries is seeking a Senior Cloud Engineer to join its Multinational Digital Infrastructure team. As a Senior Cloud Engineer, you will design and implement cloud environments that enable Anduril to effectively operate sovereign programmes in the U.K. and Australia, and to expand to other nations as Anduril&#39;s global presence increases.</p>\n<p>You will work across engineering, security, and product teams to ensure our digital infrastructure is secure, scalable, and ready to support emerging mission demands.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, deploy, and maintain enterprise cloud landing zones, security and infrastructure tooling.</li>\n<li>Collaborate with teams across the U.S. 
and Australia to enable secure connectivity with other sovereign cloud environments.</li>\n<li>Partner with government customers, authorizing officials (AOs), cybersecurity teams, and policy shops to accelerate accreditation, break through legacy barriers, and unlock access for cross-nation engineering teams.</li>\n<li>Implement infrastructure automation (IaC), observability tooling, and secure configuration baselines to support scalable, repeatable environment builds.</li>\n<li>Work closely with product, autonomy, Lattice, and Maritime engineering teams to integrate infrastructure capabilities with platform development, testing, and deployment workflows.</li>\n<li>Act as a technical leader during environment standup, troubleshooting, and validation events; ensure classified systems perform reliably in support of mission-critical needs.</li>\n<li>Support development of next-generation secure architectures for multinational development, data sharing, and mission system integration across Maritime platforms.</li>\n<li>Serve as a technical representative during customer events, exercises, and operational demonstrations to ensure infrastructure readiness and mission success.</li>\n</ul>\n<p>Required qualifications include:</p>\n<ul>\n<li>Ability to obtain and maintain a UK security clearance to SC level.</li>\n<li>Bachelor&#39;s degree in a STEM field or equivalent engineering experience.</li>\n<li>Technical depth in one or more areas, including cloud infrastructure, secure networking, systems engineering, DevSecOps, platform architecture, cybersecurity, identity &amp; access management.</li>\n<li>Specific technologies include: cloud - AWS, Azure; infrastructure as code - Terraform, CloudFormation; SCM - GitHub Enterprise; CI/CD - CircleCI, Gitlab; IDAM + SSO - Okta, AWS Identity Center.</li>\n<li>8+ years of relevant engineering, infrastructure, or technical program execution experience.</li>\n<li>Willingness to travel domestically and internationally as 
required.</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Experience with secure systems engineering, ideally within UK Government or Defence.</li>\n<li>Experience provisioning large enterprise cloud platforms for hundreds or thousands of users.</li>\n<li>Experience designing or maintaining distributed systems, secure networks, or infrastructure supporting autonomy, AI/ML, or big data workloads.</li>\n<li>Demonstrated ability to work across technical disciplines, influence without authority, and operate in ambiguous and fast-paced environments.</li>\n<li>Experience working with international partners or navigating multi-nation technical or policy workflows.</li>\n</ul>\n<p>The salary range for this role is competitive and includes equity grants as part of Anduril&#39;s total compensation package.</p>\n<p>Additional benefits include:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision plans at little to no cost to you.</li>\n<li>Generous time off, including a holiday hiatus in December.</li>\n<li>Family planning &amp; parenting support, including coverage for fertility treatments and adoption.</li>\n<li>Mental health resources, including access to free therapy and life coaching.</li>\n<li>Professional development opportunities, including annual reimbursement for professional development.</li>\n<li>Commuter benefits and relocation assistance.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b71a8e89-5f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5039728007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud 
infrastructure","Secure networking","Systems engineering","DevSecOps","Platform architecture","Cybersecurity","Identity & access management","AWS","Azure","Terraform","CloudFormation","GitHub Enterprise","CircleCI","Gitlab","Okta","AWS Identity Center"],"x-skills-preferred":["Secure systems engineering","Provisioning large enterprise cloud platforms","Designing or maintaining distributed systems","Infrastructure supporting autonomy, AI/ML, or big data workloads"],"datePosted":"2026-04-18T15:48:12.977Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud infrastructure, Secure networking, Systems engineering, DevSecOps, Platform architecture, Cybersecurity, Identity & access management, AWS, Azure, Terraform, CloudFormation, GitHub Enterprise, CircleCI, Gitlab, Okta, AWS Identity Center, Secure systems engineering, Provisioning large enterprise cloud platforms, Designing or maintaining distributed systems, Infrastructure supporting autonomy, AI/ML, or big data workloads"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0f249232-d14"},"title":"Principal Engineer, Cluster Orchestration","description":"<p>As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of the cluster orchestration systems that make this possible. This includes Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale.</p>\n<p>You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. 
Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Defining the long-term architecture for CoreWeave&#39;s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.</li>\n<li>Acting as a technical authority on scheduling, quota enforcement, fairness, pre-emption, and multi-tenant GPU isolation.</li>\n<li>Making design decisions that balance performance, reliability, cost, and operational complexity.</li>\n</ul>\n<p>In addition to these responsibilities, you will also lead the evolution of Kubernetes-native control planes, including SUNK and custom operators, and design systems that support workload admission, validation, and rollout, including model onboarding flows.</p>\n<p>You will work closely with cross-functional teams to ensure that the systems you design and implement meet the needs of our customers and are scalable, reliable, and efficient.</p>\n<p>If you have a passion for building large-scale distributed systems and are looking for a challenging and rewarding role, we encourage you to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0f249232-d14","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4658799006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$206,000 to $303,000","x-skills-required":["Kubernetes","Slurm","SUNK","Go","Cloud-native systems development","GPU-heavy platforms for AI training, inference, or HPC workloads"],"x-skills-preferred":["Kueue","Kubeflow","Argo 
Workflows","Ray","Istio","Knative"],"datePosted":"2026-04-18T15:48:07.140Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Slurm, SUNK, Go, Cloud-native systems development, GPU-heavy platforms for AI training, inference, or HPC workloads, Kueue, Kubeflow, Argo Workflows, Ray, Istio, Knative","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":206000,"maxValue":303000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_40d32156-365"},"title":"Reliability Lead, Common Services","description":"<p>As Reliability Lead, Common Services, you will establish and lead the Reliability Engineering and production operations practice for the Common Services organization. You&#39;ll partner closely with engineering leaders and teams across Common Services to define how we build, release, monitor, and operate critical services, raising the bar on reliability, availability, and operational excellence across the board.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Establish and lead the SRE / production engineering practice for the Common Services organization, including standards for reliability, incident management, and on-call, in partnership with the central Product Engineering organization.</li>\n<li>Develop an Operational Excellence strategy that focuses not only on improving system performance but also on monitoring and reducing operational toil.</li>\n<li>Partner with engineering and product teams to define SLOs, SLIs, and error budgets for critical Common Services, and ensure these become part of how teams plan and make tradeoffs.</li>\n<li>Own and improve the incident management lifecycle for Common Services, including on-call rotations, escalation paths, 
incident tooling, post-incident reviews, and follow-through on corrective actions.</li>\n<li>Drive the observability strategy (metrics, logs, traces, dashboards, alerts) for Common Services, ensuring we have actionable visibility into the health, performance, and capacity of key systems.</li>\n<li>Collaborate with engineering leads to design and review architectures for reliability, scalability, resilience, and operability, including failure modes, redundancy, and graceful degradation.</li>\n<li>Lead efforts to automate and harden operational workflows, including deployments, rollbacks, configuration management, change management, and routine maintenance tasks.</li>\n<li>Build strong, trust-based relationships with partner teams and stakeholders, becoming a go-to leader for production readiness and operational risk within Common Services.</li>\n<li>Hire, mentor, and develop SRE and production engineering talent, fostering a culture of continuous improvement, learning from incidents, and humane on-call.</li>\n<li>Partner with other SRE and production engineering leaders across CoreWeave to align on global practices, tools, and reliability goals, representing the needs and constraints of Common Services.</li>\n</ul>\n<p>You will be responsible for defining the reliability strategy, processes, and standards for the Common Services portfolio and driving consistent, high-quality operational practices across multiple teams.</p>\n<p>The base salary range for this role is $206,000 to $303,000.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_40d32156-365","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650165006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$206,000 to $303,000","x-skills-required":["Site Reliability Engineering","Production Engineering","Linux-based production environments","Containers","Orchestration technologies","Observability stacks","Alerting systems","SLIs/SLOs","Error budgets","Incident management","On-call rotations","Escalation paths","Post-incident reviews","Corrective actions","Automation tooling","Infrastructure-as-code","CI/CD pipelines"],"x-skills-preferred":["GPU workloads","High-performance computing","Latency/throughput-sensitive systems","Multi-tenant environments","Multi-region environments","Regulated environments","Service ownership models","Mentoring","Managing senior engineers"],"datePosted":"2026-04-18T15:47:45.370Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, Production Engineering, Linux-based production environments, Containers, Orchestration technologies, Observability stacks, Alerting systems, SLIs/SLOs, Error budgets, Incident management, On-call rotations, Escalation paths, Post-incident reviews, Corrective actions, Automation tooling, Infrastructure-as-code, CI/CD pipelines, GPU workloads, High-performance computing, Latency/throughput-sensitive systems, Multi-tenant environments, Multi-region environments, Regulated environments, Service ownership models, Mentoring, Managing senior 
engineers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":206000,"maxValue":303000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_87c43ead-4a1"},"title":"Staff Site Reliability Engineer, Security- GCP","description":"<p>Secure Every Identity</p>\n<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>\n<p>Okta&#39;s Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Staff Site Reliability Engineer to join a team focused on designing and developing Security solutions to harden our cloud infrastructure.</p>\n<p>We encourage you to prescribe defence-in-depth measures, apply industry security standards, and enforce the principle of least privilege to help take our Security posture to the next level.</p>\n<p>Our Infrastructure Security team has a niche skill-set that balances Security domain expertise with the ability to design, implement, and roll out infrastructure across multiple cloud environments without adding friction to product functionality or performance.</p>\n<p>We meet the ever-growing need to improve our customer safety and privacy by providing security services that are coupled with the core Okta product.</p>\n<p>This is a high-impact role in a security-centric, fast-paced organisation that is poised for massive growth and success.</p>\n<p>You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap.</p>\n<p>You will focus on engineering security aspects of the systems used across our services.</p>\n<p>Join us and be part of a company that is about to 
change the cloud computing landscape forever.</p>\n<p>As a Staff Engineer, you should be able to identify gaps, propose innovative solutions, and contribute to roadmaps while driving alignment across multiple teams within the organisation.</p>\n<p>Additionally, you should serve as a role model, providing technical mentorship to junior team members and fostering a culture of learning and growth.</p>\n<p>What are we looking for?</p>\n<p>We are looking for a security-first SRE who doesn&#39;t just &#39;flag&#39; issues but builds the automation to solve them.</p>\n<p>You should have a deep-seated intuition for cloud-native security and a proven track record of hardening large-scale GCP and AWS environments.</p>\n<p>As a Technical SME, you will design and build production infrastructure with a &#39;security-at-scale&#39; mindset.</p>\n<p>What You Will Work On</p>\n<p>Security Evangelism: Lead initiatives to strengthen our security posture for critical infrastructure and promote best practices across the engineering organisation.</p>\n<p>Incident Response &amp; Reliability: Respond to production security incidents, perform root cause analysis, and build automated preventions to ensure high performance and reliability.</p>\n<p>Automated Hardening: Identify manual security processes and automate them using custom tooling and CI/CD integrations.</p>\n<p>Architecture &amp; Documentation: Develop technical documentation, runbooks, and procedures for a 24x7 online environment.</p>\n<p>Platform Evolution: Continuously evolve our monitoring platforms, moving from simple auditing to active, automated prevention.</p>\n<p>Minimum Required Knowledge, Skills, &amp; Abilities:</p>\n<p>Experience: 8+ years of experience architecting and running complex cloud networking and infrastructure, with at least 7 years specialised in DevSecOps or Cloud Security.</p>\n<p>GCP Expertise: A minimum of 3 years of deep, hands-on experience securing GCP (GKE, GCE, Shared VPC 
etc).</p>\n<p>Infrastructure as Code (IaC): 10+ years of experience using Terraform and Chef to manage complex cloud resources and OS hardening.</p>\n<p>Automation Mastery: Expert-level proficiency in Go, Python, or Ruby for building custom security tooling and automated remediation.</p>\n<p>Hardened Containers: Proven track record of securing containerised workloads, including image scanning, K8s RBAC, and runtime security tools (e.g., CrowdStrike Falcon, Falco, or gVisor).</p>\n<p>Unflappable Troubleshooting: A &#39;see a problem, fix the problem&#39; mindset with the ability to debug complex networking, IAM, or performance issues under pressure.</p>\n<p>Security Foundations: Strong grasp of Linux internals, OS hardening (CIS benchmarks), and IP protocols (TLS/SSL, DNSSEC, BGP).</p>\n<p>Education: BS in Computer Science or equivalent professional experience.</p>\n<p>Key Responsibilities:</p>\n<p>IAM &amp; Secrets Management: Design and maintain large-scale production IAM policies and secrets management workflows.</p>\n<p>Infrastructure Hardening: Implement and maintain Public Key Infrastructure (PKI) and ensure all GCE/GKE environments meet strict compliance standards.</p>\n<p>Operational Excellence: Utilise industry-standard tools like OSQuery, Splunk, Chronicle, Nessus, or Qualys/Crowdstrike to monitor system health and security telemetry.</p>\n<p>Strategic Rollouts: Lead the phased transition of security policies from Audit/Detection mode to Blocking/Prevention mode, ensuring zero impact on production uptime.</p>\n<p>Bonus Points For:</p>\n<p>Multi-Cloud IAM Governance: Experience designing a unified IAM framework across AWS and GCP, utilising federated Identities such as Workload, Workforce Identity Federation with understanding of SAML &amp; OIDC auth mechanism and automated &#39;Least Privilege&#39; enforcement.</p>\n<p>Cloud-Native Reliability Engineering: Deep understanding of multi-cloud reliability patterns, maintaining high availability (HA) during 
security patching or infrastructure-wide hardening.</p>\n<p>Hardened Kubernetes Orchestration: Advanced experience securing GKE, EKS, and kOps, specifically implementing Pod Security Standards, Network Policies, and Admission Controllers for a &#39;Zero-Trust&#39; posture.</p>\n<p>Threat Modeling: Security Reviews &amp; Threat Modeling at both Design &amp; Implementation scope.</p>\n<p>The Okta Experience - Supporting Your Well-Being - Driving Social Impact - Developing Talent and Fostering Connection + Community</p>\n<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>\n<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws.</p>\n<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation.</p>\n<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>\n<p>Okta is committed to complying with applicable data privacy and security laws and regulations. 
For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_87c43ead-4a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/6671260","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud-native security","GCP","AWS","DevSecOps","Cloud Security","Terraform","Chef","Go","Python","Ruby","containerised workloads","image scanning","K8s RBAC","runtime security tools","Linux internals","OS hardening","IP protocols","TLS/SSL","DNSSEC","BGP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:47.221Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud-native security, GCP, AWS, DevSecOps, Cloud Security, Terraform, Chef, Go, Python, Ruby, containerised workloads, image scanning, K8s RBAC, runtime security tools, Linux internals, OS hardening, IP protocols, TLS/SSL, DNSSEC, BGP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_99450ad6-e3b"},"title":"Network Engineer - AI/HPC","description":"<p><strong>About the Role</strong></p>\n<p>We are seeking a skilled Network Engineer to join our team at xAI. 
As a Network Engineer, you will play a critical role in designing and operating large-scale networks for our AI and HPC systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and operate large-scale networks with a deep understanding of congestion control on ethernet and Infiniband</li>\n<li>Develop and optimize network configurations to ensure high performance and availability</li>\n<li>Collaborate with the team to design the next iteration of our backend and front-end networks</li>\n<li>Travel to Memphis to build capacity and participate in a team on-call rotation</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Minimum of 10 years designing and operating large-scale networks with 5 years in the ethernet AI/HPC space</li>\n<li>Deep understanding of congestion control on ethernet with Infiniband an added bonus</li>\n<li>Expertise in creating a portfolio of metrics for performance and operations to optimize the fleet for training and inference traffic</li>\n<li>Experience with Python to automate away repetitive tasks and support daily work analyzing large data sets</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Opportunity to work with a highly motivated team focused on engineering excellence</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work on cutting-edge AI and HPC projects</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you are a motivated and experienced Network Engineer looking for a new challenge, please submit your application, including your resume and cover letter, to [insert contact information].</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_99450ad6-e3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4946691007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["RoCEv2","NCCL","Python","Ethernet","Infiniband","AI training and inference workloads"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:15.340Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Memphis, TN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"RoCEv2, NCCL, Python, Ethernet, Infiniband, AI training and inference workloads"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a45e2e8c-400"},"title":"Staff Software Engineer, Foundational Model Serving","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Foundation Model Serving is the API Product for hosting and serving frontier AI model inference for open source models like Llama, Qwen, and GPT OSS as well as proprietary models like Claude and OpenAI GPT.</p>\n<p>We&#39;re looking for engineers who have owned high-scale, operationally sensitive systems like customer-facing APIs, Edge Gateways, ML Inference, or similar services and have an interest in going deep on building LLM APIs and runtimes at scale. 
As a Staff Engineer, you&#39;ll play a critical role in shaping both the product experience and core infrastructure.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Design and implement core systems and APIs that power Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.</li>\n<li>Partner with product and engineering leadership to define the technical roadmap and long-term architecture for serving workloads.</li>\n<li>Drive architectural decisions and trade-offs to optimize performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.</li>\n<li>Contribute directly to key components across the serving infrastructure, from working in systems like vLLM and SGLang to creating token-based rate limiters and optimizers, ensuring smooth and efficient operations at scale.</li>\n<li>Collaborate cross-functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.</li>\n<li>Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.</li>\n<li>Represent the team in cross-organizational technical discussions and influence Databricks’ broader AI platform strategy.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of experience building and operating large-scale distributed systems.</li>\n<li>Experience leading high-scale operationally sensitive backend systems.</li>\n<li>A track record of up-leveling teams&#39; engineering excellence.</li>\n<li>Strong foundation in algorithms, data structures, and system design as applied to large-scale, low-latency serving systems.</li>\n<li>Proven ability to deliver technically complex, high-impact initiatives that create measurable customer or business value.</li>\n<li>Strong communication skills and ability to collaborate across teams in fast-moving environments.</li>\n<li>Strategic and product-oriented mindset 
with the ability to align technical execution with long-term vision.</li>\n<li>Passion for mentoring, growing engineers, and fostering technical excellence.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a45e2e8c-400","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8224683002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["large-scale distributed systems","high-scale operationally sensitive backend systems","algorithms","data structures","system design","low-latency serving systems","GPU serving workloads","vLLM","SGLang","token based rate limiters","optimizers"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:55.798Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, high-scale operationally sensitive backend systems, algorithms, data structures, system design, low-latency serving systems, GPU serving workloads, vLLM, SGLang, token based rate limiters, optimizers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ac0b2f4-6c9"},"title":"Member of Technical Staff - Imagine Product","description":"<p><strong>About the Role</strong></p>\n<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. 
You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>\n<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low latency at global scale.</li>\n<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>\n<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>\n<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>\n<li>Own full-cycle development of solutions: from system design and prototyping to deployment, monitoring, observability, and iterative refinement.</li>\n<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>\n</ul>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>\n<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and 
handling high-throughput workloads.</li>\n<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>\n<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>\n<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>\n<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>\n</ul>\n<p><strong>Preferred Skills and Experience</strong></p>\n<ul>\n<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>\n<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>\n<li>Background in AI-driven consumer products or media generation technologies.</li>\n<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>\n</ul>\n<p><strong>Compensation and Benefits</strong></p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3ac0b2f4-6c9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://xAI.com","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052027007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Python","Rust","clean, efficient, maintainable, and scalable code","large-scale data infrastructure and pipelines","multi-modal or media-heavy AI applications","production-grade solutions","quality and uptime"],"x-skills-preferred":["real-time systems","inference serving","multi-modal data processing at scale","distributed systems","containerisation","observability tools","performance tuning for AI workloads","AI-driven consumer products","media generation technologies"],"datePosted":"2026-04-18T15:41:51.975Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18ae1499-b22"},"title":"Research Engineer, Discovery","description":"<p>As a Research Engineer on our team, you 
will work end-to-end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference, and an eagerness to dive in quickly and get up to speed in areas where they are not yet experts.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>\n<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>\n<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>\n<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>\n<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>\n<li>Develop large scale data pipelines to handle advanced language model training requirements</li>\n<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>\n<li>Are a strong communicator and enjoy working collaboratively</li>\n<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>\n<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>\n<li>Have a proven track record of building large-scale data pipelines and distributed storage systems</li>\n<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>\n<li>Can work effectively across the full ML stack from data pipelines to performance 
optimization</li>\n<li>Have experience collaborating with other researchers to scale experimental ideas</li>\n<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>\n<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>\n<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>\n<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>\n<li>Familiarity with VM and container orchestration</li>\n<li>Experience with workflow orchestration tools and experiment management systems</li>\n<li>History working with large scale reinforcement learning</li>\n<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>\n</ul>\n<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18ae1499-b22","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4669581008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["large-scale distributed systems","containerization technologies (Docker, Kubernetes)","performance optimization techniques","system architectures for high-throughput ML workloads","data pipelines","distributed storage systems","ML frameworks (PyTorch, JAX, etc.)","GPU/TPU architectures","cloud platforms (AWS, GCP)","VM and container orchestration","workflow orchestration tools","experiment management 
systems","reinforcement learning","large scale data pipelines (Beam, Spark, Dask, …)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:42.408Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f43bb14-3c4"},"title":"Senior Cloud Engineer","description":"<p>Shield AI is seeking a Senior Cloud Engineer to support its leadership in applied artificial intelligence development. In this role, you will be responsible for engineering, deploying, provisioning, and managing critical cloud systems that drive innovation across Shield AI&#39;s public and private cloud environments, both domestically and internationally.</p>\n<p>As part of the Cloud and Infrastructure team within Enterprise Operations, you will play a key role in ensuring the performance, scalability, and reliability of these systems to support various business units. 
This position may involve occasional travel to Shield AI locations.</p>\n<p><strong>Responsibilities:</strong></p>\n<p><strong>Engineering:</strong></p>\n<ul>\n<li>Manage and optimize multi-cloud infrastructure (Azure, AWS) for performance, reliability, and scalability.</li>\n<li>Support and optimize cloud and virtual machine environments, assisting with capacity planning, performance monitoring, security compliance, and vulnerability remediation.</li>\n<li>Assist in implementing and maintaining infrastructure systems, including servers, storage, backup solutions, and disaster recovery processes, for both public and private clouds.</li>\n<li>Continuously learn and adapt to emerging technologies and platforms, leveraging automation wherever possible.</li>\n<li>Author and produce the necessary documentation for engineered and maintained systems along with associated processes that supporting teams can leverage.</li>\n<li>Assist in researching, recommending, and developing innovative solutions for complex requirements and issue resolution.</li>\n<li>Collaborate cross-functionally with AI, DevOps, and Security teams to ensure compliance, observability, and resilience in mission-critical environments.</li>\n<li>Participate in Agile methodologies and sound engineering principles.</li>\n</ul>\n<p><strong>Operations and Support:</strong></p>\n<ul>\n<li>Perform daily system monitoring, verifying the integrity and availability of all server resources, systems and key processes, reviewing system and application logs.</li>\n<li>Support system maintenance and upgrades, including OS patching, software configuration, hardware updates, and performance tuning to ensure optimal cloud infrastructure performance.</li>\n<li>Provide escalated support for operational issues possibly during and after normal business hours for systems, workloads, and Kubernetes AI infrastructure.</li>\n<li>Analyze, troubleshoot and resolve system infrastructure and software issues.</li>\n<li>Ability to 
participate in on-call, emergency, or maintenance roles</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science or related field, or equivalent experience (4+ years) plus an engineer level certification, Azure/AWS Associate, or another similar level certification.</li>\n<li>4 years’ experience supporting applications and systems in a production environment in high-availability, mission-critical, or defense-grade environments preferred.</li>\n<li>Comfortable with operational efficiencies utilizing Infrastructure as Code (IaC) solutions (e.g., Terraform, Ansible).</li>\n<li>Strong understanding of networking concepts (VPCs, VPNs, subnets, routing, firewalls).</li>\n<li>Experience in automating repetitive tasks using scripting languages such as PowerShell, Python, or Bash.</li>\n<li>Experience with deployment and systems administration of at least one type of Linux distribution (e.g., RHEL, Ubuntu)</li>\n<li>Experience with concepts of Microsoft Windows Server administration, Azure, and Active Directory environments</li>\n<li>Possesses organizational skills, with a process-oriented mindset, attention to detail, and effective verbal and written communication abilities.</li>\n<li>Ability to work independently to accomplish assigned tasks.</li>\n<li>Solution-oriented, constructive approach to problem-solving.</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience deploying and maintaining workloads in Azure public cloud environments.</li>\n<li>Hands-on experience with containerization and Kubernetes-based workloads.</li>\n<li>Strong understanding of virtualization and private cloud platforms (e.g., VMware, Hyper-V, KVM).</li>\n<li>Background in DevOps, Site Reliability Engineering (SRE), or cloud infrastructure roles.</li>\n<li>Proficiency with configuration management and automation tools (e.g., Ansible, Chef, Puppet, Terraform).</li>\n<li>Experience building and optimizing CI/CD 
pipelines.</li>\n</ul>\n<p><strong>Salary and Benefits:</strong></p>\n<ul>\n<li>$110,000 - $170,000 a year</li>\n<li>Full-time regular employee offer package: Pay within range listed + Bonus + Benefits + Equity</li>\n<li>Temporary employee offer package: Pay within range listed above + temporary benefits package (applicable after 60 days of employment)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f43bb14-3c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/702e2609-db48-49ab-8bec-d405c956a6ce","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$110,000 - $170,000 a year","x-skills-required":["Cloud Engineering","Multi-cloud infrastructure","Azure","AWS","Networking concepts","Infrastructure as Code","Scripting languages","Linux distribution","Microsoft Windows Server administration","Active Directory environments"],"x-skills-preferred":["Containerization","Kubernetes-based workloads","Virtualization","Private cloud platforms","DevOps","Site Reliability Engineering","Configuration management","Automation tools","CI/CD pipelines"],"datePosted":"2026-04-17T13:01:14.253Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Diego, California / Dallas, Texas / San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud Engineering, Multi-cloud infrastructure, Azure, AWS, Networking concepts, Infrastructure as Code, Scripting languages, Linux distribution, Microsoft Windows Server administration, Active Directory environments, Containerization, Kubernetes-based workloads, Virtualization, Private cloud platforms, 
DevOps, Site Reliability Engineering, Configuration management, Automation tools, CI/CD pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":110000,"maxValue":170000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93a43345-780"},"title":"FinOps Program Manager","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life.</p>\n<p>The FinOps function is responsible for financial accountability, visibility, and optimization across all engineering-related spend at Plaid. This includes cloud infrastructure, AI/ML and data workloads, third-party SaaS tools, and other technical investments that support Plaid&#39;s products and internal platforms.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Monitors and analyzes engineering spend across cloud, AI/ML, data platforms, and SaaS, identifying trends, anomalies, and optimization opportunities.</li>\n<li>Builds and maintains forecasts for engineering spend, partnering with Finance and engineering leaders to understand drivers, assumptions, and risks.</li>\n<li>Partners with engineering, product, and TPMs to incorporate cost considerations into roadmaps, architectural decisions, and execution plans.</li>\n<li>Leads cost optimization initiatives, such as rightsizing, commitment strategies, and workload efficiency improvements, in collaboration with engineering owners.</li>\n<li>Creates and maintains dashboards and reporting that make spend understandable and actionable for both engineers and executives.</li>\n<li>Implements FinOps practices and processes, including 
showback/chargeback models, unit economics, and cost ownership frameworks.</li>\n<li>Partners on tooling and automation, working with data and engineering teams to improve cost visibility, forecasting accuracy, and operational efficiency.</li>\n<li>Drives alignment and behavior change, helping teams balance cost, performance, reliability, and velocity through data-driven decision making.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>6–10+ years of relevant experience working at the intersection of engineering, infrastructure, data, or finance in a cloud-native or SaaS environment.</li>\n<li>Proven experience partnering closely with engineering teams to influence decisions involving cloud infrastructure, data platforms, AI/ML workloads, or SaaS spend.</li>\n<li>Working understanding of modern cloud-native architectures, including core components such as compute, storage, networking, data pipelines, and managed services, enough to engage credibly with engineers on design, tradeoffs, and cost drivers.</li>\n<li>Strong foundation in cost analysis, forecasting, budgeting, and variance management, with the ability to translate data into clear, actionable insights.</li>\n<li>Comfort working directly with data, including writing SQL (or effectively using AI-assisted tools to do so) to explore datasets, validate assumptions, and answer ad hoc questions.</li>\n<li>Experience building clear, high-quality dashboards and BI artifacts that are not only accurate, but intuitive and delightful for engineers and leaders to use.</li>\n<li>Demonstrated success driving adoption and behavior change, embedding cost awareness into day-to-day engineering workflows, not just producing reports.</li>\n<li>Experience owning and delivering cross-functional programs end-to-end, often without direct authority or a dedicated team.</li>\n<li>Familiarity with FinOps principles and practices (e.g., shared ownership, showback/chargeback, unit economics, optimization 
strategies).</li>\n<li>Strong communication skills, with the ability to tailor complex technical and financial concepts for engineering, finance, and executive audiences.</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<ul>\n<li>Hands-on familiarity with cloud cost management tools (e.g., AWS Cost Explorer, GCP Billing, Azure Cost Management, CloudHealth, Cloudability, or similar).</li>\n<li>Experience working with or supporting data platforms and AI/ML workloads, including understanding cost drivers for batch processing, streaming, storage, and model training/inference.</li>\n<li>Exposure to showback/chargeback models, cost allocation strategies, or product-level unit economics.</li>\n<li>Experience improving data models or pipelines that support analytics, reporting, or financial attribution.</li>\n<li>Familiarity with BI tools such as Mode, Tableau, Looker, or similar, and a strong eye for dashboard usability and design.</li>\n<li>Background in a technical role (e.g., engineering, TPM, infra, data, or engineering operations) before moving into a more cross-functional or business-oriented position.</li>\n<li>Experience operating in a high-growth or rapidly scaling environment, where cost structures and investment priorities are evolving quickly.</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. 
We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93a43345-780","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/acb399b1-e0f8-45f3-bffa-c89c9c573a12","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$172,800-$259,200 per year","x-skills-required":["cloud infrastructure","AI/ML","data platforms","SaaS","cost analysis","forecasting","budgeting","variance management","SQL","data visualization","dashboard creation","cross-functional program management","FinOps principles","showback/chargeback models","unit economics","optimization strategies"],"x-skills-preferred":["cloud cost management tools","data platforms and AI/ML workloads","cost allocation strategies","product-level unit economics","BI tools","technical role background","high-growth environment experience"],"datePosted":"2026-04-17T12:52:18.112Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure, AI/ML, data platforms, SaaS, cost analysis, forecasting, budgeting, variance management, SQL, data visualization, dashboard creation, cross-functional program management, FinOps principles, showback/chargeback models, unit economics, optimization strategies, cloud cost management tools, data platforms and AI/ML workloads, cost allocation strategies, product-level unit economics, BI tools, technical role background, high-growth environment 
experience","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":172800,"maxValue":259200,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e308ff1b-d8b"},"title":"Software Engineer, DevOps, Research Platform","description":"<p>About Mistral AI\\n\\nAt Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.\\n\\nWe are a team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.\\n\\nRole Summary\\n\\nWe are seeking a talented and experienced software engineer to join our Research Platform team. You&#39;ll work closely with our R&amp;D team to build a cloud agnostic platform that improves the stability, scalability and velocity across the research department.\\n\\nResponsibilities\\n\\nAs a DevOps/Platform Engineer, your responsibilities will include:\\n\\n* Designing and implementing complex systems (e.g. 
scale our research CI with a strong focus toward reliability, reproducibility and speed)\\n\\n* Building flexible yet solid and accessible development environments for researchers, so they can focus on their core mission.\\n\\n* Designing, implementing and advocating for solutions addressing large amounts of data and maintainable data pipelines.\\n\\n* Optimizing a variety of builds: container images, large libraries compilation times, python environments...\\n\\n* Building strong relationships with researchers, understanding their workflow and enabling them to achieve more by leveraging your expertise.\\n\\n* Communicating and producing documentation or any content that will help them to make the most out of the tools and systems you&#39;ll build.\\n\\n* Being part of the team that &quot;platformizes&quot; research and constantly improves the daily experience for researchers while avoiding future roadblocks.\\n\\nAbout You\\n\\n* 5+ years of successful experience in a similar DX / DevOps / SRE role.\\n\\n* Proficiency in software development (Python, Go...) and programming best practices.\\n\\n* Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations...\\n\\n* Exposure to infrastructure management: CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability...\\n\\n* Technical product mindset (e.g. 
understanding how to debug poor adoption).\\n\\n* Excellent problem-solving and communication skills (ability to contextualize, gauge risks and get buy-in for high-stakes and impactful solutions).\\n\\n* Ownership, high agency and constantly seeking to learn and improve things for others.\\n\\n* Autonomous, self-driven and able to work well in a fast-paced startup environment.\\n\\n* Low ego and team spirit mindset.\\n\\nYour Application Will Be All The More Interesting If You Also Have:\\n\\n* First-hand Bazel (or equivalent) experience.\\n\\n* Strong knowledge of Python&#39;s ecosystem.\\n\\n* Familiarity with GPU based workloads and ecosystems.\\n\\n* Experience of full remote environments (you&#39;re comfortable with having some of your users on the other side of the globe).\\n\\nHiring Process\\n\\n* Intro Call - 30 min\\n\\n* Tech Culture Interview - 30 min\\n\\n* Technical Rounds - 2 x 45 min\\n\\n* Culture-fit Discussion - 30 min\\n\\n* Reference Calls\\n\\nBy Applying, You Agree To Our Applicant Privacy Policy.\\n\\nAdditional Information\\n\\nLocation &amp; Remote\\n\\nThis role is primarily based at one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting, currently France &amp; UK. 
In that case, we ask all new hires to visit our local office:\\n\\n* for the first week of their onboarding (accommodation and travelling covered)\\n\\n* then at least 3 days per month\\n\\nWhat We Offer\\n\\n* Competitive salary and equity\\n\\n* Health insurance\\n\\n* Transportation allowance\\n\\n* Sport allowance\\n\\n* Meal vouchers\\n\\n* Private pension plan\\n\\n* Parental: Generous parental leave policy\\n\\n* Visa sponsorship\\n\\nBy Applying, You Agree To Our Applicant Privacy Policy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e308ff1b-d8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software development","python","go","site reliability engineering","infrastructure management","CI/CD","containerization","orchestration","infra-as-code","monitoring","logging","alerting","observability"],"x-skills-preferred":["bazel","python's ecosystem","gpu based workloads","full remote environments"],"datePosted":"2026-04-17T12:48:20.869Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, python, go, site reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, bazel, python's ecosystem, gpu based workloads, full remote 
environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ffccb977-f95"},"title":"Senior Site Reliability Engineer","description":"<p>Are you excited by the idea of building fast, reliable, and intelligent infrastructure for a product used by engineering teams around the world? We&#39;re looking for a Senior Site Reliability Engineer to join the Backstage team at Spotify. We&#39;re building the next generation of our developer platform, one that doesn&#39;t just manage software, but actively helps create and maintain it through AI-native workflows.</p>\n<p>In 2026, SRE isn&#39;t just about uptime; it&#39;s about symbiosis. As part of our growing engineering team, you&#39;ll design, build, and operate the cloud infrastructure behind our external developer portal product and our internal fleet of background coding agents. You&#39;ll collaborate closely with experienced engineers (both human and AI-assisted) while operating at real-world scale, with deep observability, strong safety boundaries, and the unique reliability challenges of agentic production systems.</p>\n<p>Backstage is more than just a platform; it&#39;s a foundational force in the developer community. Born out of Spotify&#39;s quest for better developer tooling, Backstage now powers developer portals across the globe. But we didn&#39;t stop at catalogs and templates. Today, Backstage is becoming the command center for AI-native engineering. From enterprises orchestrating large-scale migrations to fast-moving teams using AI to improve velocity and quality, our solutions are redefining what great developer experience looks like.</p>\n<p>As part of the Backstage team, you&#39;ll shape developer experience for companies large and small, for our thriving open-source community, and for Spotify itself. 
You&#39;ll help define how reliable, secure infrastructure enables the next wave of agentic developer tooling.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own fleet reliability. Lead the reliability, security, and scalability strategy for Portal&#39;s SaaS infrastructure, including the runtime environments that power our platform and LLM-driven agent workflows. Define SLOs, drive capacity planning, and ensure our systems meet the demands of a rapidly growing product.</li>\n</ul>\n<ul>\n<li>Architect for the agentic era. Design and evolve infrastructure on GCP and AWS using Terraform and infrastructure-from-code patterns. Shape how we structure environments for non-deterministic AI workloads, including sandboxing, resource isolation, cost governance, and security boundaries.</li>\n</ul>\n<ul>\n<li>Drive operational excellence. Evolve our incident management, on-call, and postmortem practices. Leverage AI assistants to accelerate root cause analysis and build increasingly self-healing capabilities into our production systems.</li>\n</ul>\n<ul>\n<li>Lead fullstack reliability. Operate across a modern web stack (TypeScript, React, Python). While not frontend-heavy, you&#39;ll diagnose and resolve issues across the stack and drive reliability improvements end-to-end.</li>\n</ul>\n<ul>\n<li>Mentor and multiply. Raise the reliability IQ of the broader engineering team. Establish SRE best practices, conduct production-readiness reviews, and mentor engineers on operational thinking.</li>\n</ul>\n<ul>\n<li>Shape the roadmap. Partner with engineering and product leadership to evolve our infrastructure in step with generative AI features. 
Translate operational insights into strategic input on the product roadmap.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You have 5+ years of hands-on experience operating cloud infrastructure (GCP and/or AWS), using Terraform and Kubernetes to run production systems at scale.</li>\n</ul>\n<ul>\n<li>You have practical experience, or a strong demonstrated interest, in operating LLM-based systems, RAG pipelines, or agentic workloads, and understand the reliability challenges of non-deterministic systems.</li>\n</ul>\n<ul>\n<li>You think in distributed systems first principles (consistency, availability, partition tolerance) and translate that thinking into pragmatic infrastructure decisions.</li>\n</ul>\n<ul>\n<li>You are proficient in at least one modern language (TypeScript, Java, Go, or Python) and comfortable navigating large, heterogeneous codebases, including environments where AI-generated PRs are common.</li>\n</ul>\n<ul>\n<li>You build automation and improve systems so that whole categories of operational issues disappear over time.</li>\n</ul>\n<ul>\n<li>You communicate complex infrastructure trade-offs clearly to both technical and non-technical stakeholders, and you write postmortems that lead to meaningful change.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ffccb977-f95","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/fdfe281d-889c-478a-8f27-c9bc36b2b0cf","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$164,448–$234,926 USD","x-skills-required":["cloud infrastructure","Terraform","Kubernetes","LLM-based systems","RAG pipelines","agentic workloads","distributed 
systems","TypeScript","Java","Go","Python"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:18:50.967Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure, Terraform, Kubernetes, LLM-based systems, RAG pipelines, agentic workloads, distributed systems, TypeScript, Java, Go, Python","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":164448,"maxValue":234926,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a560bd4c-a1a"},"title":"Cloud Security Engineer","description":"<p>We&#39;re looking for a Cloud Security Engineer to join our team. As a Cloud Security Engineer at Starling, you&#39;ll be building and supporting tooling and infrastructure that spans AWS and GCP, supporting our internal operations and interfacing with other teams to deliver the services that support our business.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Engineer Secure Foundations: You will lead the design and implementation of critical security services, with a heavy focus on building robust Identity and Access Management (IAM) systems and automated, API-driven certificate management workflows.</li>\n<li>Security-as-Code &amp; Scalability: Leveraging a software-first philosophy, you will develop and maintain high-quality, scalable security tooling and middleware within ECS and Kubernetes environments, ensuring security logic is integrated directly into the deployment pipeline.</li>\n<li>Collaborative Code Ownership: You will serve as a technical authority in cross-functional code reviews, acting as an engineering peer who helps teams bake security into their services from the first line of code to the final pull request.</li>\n<li>Proactive System Hardening: You will stay 
ahead of the evolving threat landscape by treating security as a continuous engineering challenge, proactively identifying vulnerabilities and architecting technical solutions to fortify our global ecosystem.</li>\n</ul>\n<p>Professional Requirements:</p>\n<ul>\n<li>Demonstrated ability to architect secure, distributed systems with a focus on programmatic IAM and automated, API-driven PKI management.</li>\n<li>Extensive experience with Infrastructure as Code (IaC) in Terraform and a deep commitment to writing clean, maintainable, and production-grade code, ideally in Golang.</li>\n<li>A test-first mentality toward security, with experience building unit and integration tests into CI/CD pipelines to ensure that security guardrails are as reliable as the features they protect.</li>\n<li>A strong conceptual grasp of cryptographic primitives and hands-on experience securing containerized workloads and service meshes within ECS and Kubernetes.</li>\n<li>A track record of taking end-to-end ownership of complex technical projects, from initial design docs and RFCs through to deployment and observability.</li>\n<li>A belief that if it isn&#39;t tested, it&#39;s broken, and a drive to proactively identify and fix vulnerabilities by treating security as a continuous engineering challenge.</li>\n</ul>\n<p>Our Team Philosophy:\nThe Security Engineering team is a diverse and dynamic group passionate about building secure and resilient systems. We&#39;re enthusiastic about security, but we&#39;re not about rigid, one-size-fits-all controls. 
We believe in striking a balance between protecting our systems and empowering our developers to build and innovate.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a560bd4c-a1a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Starling","sameAs":"https://www.starlingbank.com/","logo":"https://logos.yubhub.co/starlingbank.com.png"},"x-apply-url":"https://apply.workable.com/j/3B7E26FC24","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud Security","AWS","GCP","Identity and Access Management","API-driven Certificate Management","Infrastructure as Code","Terraform","Golang","Cryptographic Primitives","Containerized Workloads","Service Meshes"],"x-skills-preferred":[],"datePosted":"2026-03-20T16:14:58.088Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Cloud Security, AWS, GCP, Identity and Access Management, API-driven Certificate Management, Infrastructure as Code, Terraform, Golang, Cryptographic Primitives, Containerized Workloads, Service Meshes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_871d4845-25a"},"title":"Software Engineer, DevOps, Research Platform","description":"<p>We are seeking a talented and experienced software engineer to join our Research Platform team. 
You&#39;ll work closely with our R&amp;D team to build a cloud agnostic platform that improves the stability, scalability and velocity across the research department.</p>\n<p>As a DevOps/Platform Engineer, your responsibilities will include designing and implementing complex systems, building flexible yet solid and accessible development environments for researchers, designing, implementing and advocating for solutions addressing large amounts of data and maintainable data pipelines, optimizing a variety of builds, building strong relationships with researchers, communicating and producing documentation or any content that will help them to make the most out of the tools and systems you&#39;ll build.</p>\n<p>About you:</p>\n<ul>\n<li>5+ years of successful experience in a similar DX / DevOps / SRE role.</li>\n<li>Proficiency in software development (Python, Go...) and programming best practices.</li>\n<li>Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations...</li>\n<li>Exposure to infrastructure management: CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability...</li>\n<li>Technical product mindset (e.g. 
understanding how to debug poor adoption).</li>\n<li>Excellent problem-solving and communication skills (ability to contextualize, gauge risks and get buy-in for high-stakes and impactful solutions).</li>\n<li>Ownership, high agency and constantly seeking to learn and improve things for others.</li>\n<li>Autonomous, self-driven and able to work well in a fast-paced startup environment.</li>\n<li>Low ego and team spirit mindset.</li>\n</ul>\n<p>Your application will be all the more interesting if you also have:</p>\n<ul>\n<li>First-hand Bazel (or equivalent) experience.</li>\n<li>Strong knowledge of Python&#39;s ecosystem.</li>\n<li>Familiarity with GPU based workloads and ecosystems.</li>\n<li>Experience of full remote environments (you&#39;re comfortable with having some of your users on the other side of the globe).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_871d4845-25a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai/careers"},"x-apply-url":"https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software development","Python","Go","site reliability engineering","infrastructure management","CI/CD","containerization","orchestration","infra-as-code","monitoring","logging","alerting","observability"],"x-skills-preferred":["Bazel","Python's ecosystem","GPU based workloads and ecosystems","full remote environments"],"datePosted":"2026-03-10T11:31:49.456Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, Python, Go, site 
reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, Bazel, Python's ecosystem, GPU based workloads and ecosystems, full remote environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_69369815-a11"},"title":"Associate/Vice President, AI Infrastructure Engineer","description":"<p>At BlackRock, technology underpins everything we do. AI is a core strategic priority for the firm, embedded across Aladdin and our investment, client, and operational platforms. We are seeking an AI Infrastructure Engineer to help build and operate the foundational infrastructure that enables AI systems to scale safely, securely, and reliably across the enterprise.</p>\n<p>This role sits within Aladdin Platform Engineering and focuses on the infrastructure and platform services required to support machine learning models, large language models (LLMs), and emerging AI capabilities in production. 
The successful candidate will work closely with AI Engineers, Data Scientists, Platform Engineers, Security, and Product partners to deliver resilient, cloud-native AI platforms in a highly regulated environment.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Design, build, and operate AI-focused infrastructure platforms supporting model development, training, evaluation, and inference.</li>\n<li>Engineer scalable, reliable, and secure cloud-native services to support AI workloads across AWS, Azure, and hybrid environments.</li>\n<li>Partner with AI Engineering and Data Science teams to improve developer experience, performance, and operational stability of AI systems.</li>\n<li>Enable production deployment of ML models and LLMs within governed enterprise environments, aligned with firmwide risk and compliance standards.</li>\n<li>Implement and maintain infrastructure as code and automation to ensure repeatable, auditable platform provisioning.</li>\n<li>Build and operate observability, monitoring, and alerting solutions for AI platforms, ensuring availability, performance, and cost transparency.</li>\n<li>Collaborate with Security and Risk partners to integrate identity, access controls, data protection, and governance into AI infrastructure.</li>\n<li>Contribute to architectural decisions and technical standards for AI platforms across Aladdin.</li>\n<li>Participate in on-call rotations and operational support as required for critical platforms.</li>\n<li>Continuously evaluate emerging AI infrastructure technologies and apply them pragmatically within BlackRock’s enterprise context.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Strong experience in cloud infrastructure, platform engineering, or systems engineering roles.</li>\n<li>4+ years of hands-on expertise with AWS and/or Azure and/or GCP, including Azure ML, Azure Foundry, AWS Bedrock, Google Vertex, as well as cloud compute, networking, storage, and security 
services.</li>\n<li>Understanding of ML platform operations and governance concepts, including model deployment strategies, lifecycle management, monitoring/observability, and Disaster Recovery</li>\n<li>Experience supporting LLMs, generative AI platforms, or model serving infrastructure.</li>\n<li>Experience supporting AI and machine learning workloads, with exposure to managed compute for model training and fine-tuning, experimentation over large datasets, and end-to-end MLOps pipeline flow including data ingestion, training, validation, and deployment.</li>\n<li>Proficiency with Infrastructure as Code tools (e.g., Terraform, ARM/Bicep, CloudFormation).</li>\n<li>Strong programming or scripting skills (e.g., Python, Bash, or similar).</li>\n<li>Experience building and operating containerized and Kubernetes-based platforms.</li>\n<li>Solid understanding of reliability, scalability, observability, and operational best practices.</li>\n<li>Ability to work effectively in cross-functional teams and communicate complex technical concepts clearly.</li>\n</ul>\n<p><strong>Our Benefits</strong></p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>\n<p><strong>Our Hybrid Work Model</strong></p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. 
Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_69369815-a11","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/2JsY2bUdeEEzUfhn796RPb/associate%2Fvice-president%2C-ai-infrastructure-engineer-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Azure","GCP","Cloud compute","Networking","Storage","Security services","ML platform operations","Governance concepts","Model deployment strategies","Lifecycle management","Monitoring/observability","Disaster Recovery","LLMs","Generative AI platforms","Model serving infrastructure","AI and machine learning workloads","Managed compute","Fine-tuning","Experimentation","End-to-end MLOps pipeline flow","Data ingestion","Training","Validation","Deployment","Infrastructure as Code","Terraform","ARM/Bicep","CloudFormation","Programming","Scripting","Containerized and Kubernetes-based platforms","Reliability","Scalability","Observability","Operational best practices"],"x-skills-preferred":["GPU or accelerator-based infrastructure","Financial services or highly regulated industries","Multicloud architectures and enterprise governance 
requirements"],"datePosted":"2026-03-09T16:39:47.983Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh, Scotland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Azure, GCP, Cloud compute, Networking, Storage, Security services, ML platform operations, Governance concepts, Model deployment strategies, Lifecycle management, Monitoring/observability, Disaster Recovery, LLMs, Generative AI platforms, Model serving infrastructure, AI and machine learning workloads, Managed compute, Fine-tuning, Experimentation, End-to-end MLOps pipeline flow, Data ingestion, Training, Validation, Deployment, Infrastructure as Code, Terraform, ARM/Bicep, CloudFormation, Programming, Scripting, Containerized and Kubernetes-based platforms, Reliability, Scalability, Observability, Operational best practices, GPU or accelerator-based infrastructure, Financial services or highly regulated industries, Multicloud architectures and enterprise governance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a51375e8-30e"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost. Our work spans today&#39;s frontier AI workloads and directly shapes the next generation of accelerators, system architectures, and large-scale AI platforms. We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. 
The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures. This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale. In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>About the Team</p>\n<p>We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.</p>\n<p>Microsoft Superintelligence Team</p>\n<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you’re a brilliant, highly ambitious, and low-ego individual, you’ll fit right in—come and join us as we work on our next generation of models!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</li>\n<li>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</li>\n<li>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</li>\n<li>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</li>\n<li>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</li>
\n<li>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</li>\n<li>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product teams.</li>\n<li>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a51375e8-30e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["Experience designing or operating large-scale AI clusters for training or inference","Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications","Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Background in performance modeling and capacity planning for future hardware 
generations","Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML systems"],"datePosted":"2026-03-08T22:18:41.443Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, Experience designing or operating large-scale AI clusters for training or inference, Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications, Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Background in performance modeling and capacity planning for future hardware generations, Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd1a0d16-311"},"title":"Member of Technical Staff, Software Co-Design AI HPC Systems","description":"<p>Our team&#39;s mission is to architect, co-design, and productionize next-generation AI systems at datacenter scale. 
We operate at the intersection of models, systems software, networking, storage, and AI hardware, optimizing end-to-end performance, efficiency, reliability, and cost.</p>\n<p>We pursue this mission through deep hardware–software co-design, combining rigorous systems thinking with hands-on engineering. The team invests heavily in understanding real production workloads (large-scale training, inference, and emerging multimodal models) and translating those insights into concrete improvements across the stack: from kernels, runtimes, and distributed systems, all the way down to silicon-level trade-offs and datacenter-scale architectures.</p>\n<p>This role sits at the boundary between exploration and production. You will work closely with internal infrastructure, hardware, compiler, and product teams, as well as external partners across the hardware and systems ecosystem. Our operating model emphasizes rapid ideation and prototyping, followed by disciplined execution to drive high-leverage ideas into production systems that operate at massive scale.</p>\n<p>In addition to delivering real-world impact on large-scale AI platforms, the team actively contributes to the broader research and engineering community. Our work aligns closely with leading communities in ML systems, distributed systems, computer architecture, and high-performance computing, and we regularly publish, prototype, and open-source impactful technologies where appropriate.</p>\n<p>Microsoft Superintelligence Team</p>\n<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. 
The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society—advancing science, education, and global well-being. We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact.</p>\n<p>Responsibilities</p>\n<p>Lead the co-design of AI systems across hardware and software boundaries, spanning accelerators, interconnects, memory systems, storage, runtimes, and distributed training/inference frameworks.</p>\n<p>Drive architectural decisions by analyzing real workloads, identifying bottlenecks across compute, communication, and data movement, and translating findings into actionable system and hardware requirements.</p>\n<p>Co-design and optimize parallelism strategies, execution models, and distributed algorithms to improve scalability, utilization, reliability, and cost efficiency of large-scale AI systems.</p>\n<p>Develop and evaluate what-if performance models to project system behavior under future workloads, model architectures, and hardware generations, providing early guidance to hardware and platform roadmaps.</p>\n<p>Partner with compiler, kernel, and runtime teams to unlock the full performance of current and next-generation accelerators, including custom kernels, scheduling strategies, and memory optimizations.</p>\n<p>Influence and guide AI hardware design at system and silicon levels, including accelerator microarchitecture, interconnect topology, memory hierarchy, and system integration trade-offs.</p>\n<p>Lead cross-functional efforts to prototype, validate, and productionize high-impact co-design ideas, working across infrastructure, hardware, and product 
teams.</p>\n<p>Mentor senior engineers and researchers, set technical direction, and raise the overall bar for systems rigor, performance engineering, and co-design thinking across the organization.</p>\n<p>Qualifications</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Additional or Preferred Qualifications</p>\n<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Strong background in one or more of the following areas:</p>\n<ul>\n<li>AI accelerator or GPU architectures</li>\n<li>Distributed systems and large-scale AI training/inference</li>\n<li>High-performance computing (HPC) and collective communications</li>\n<li>ML systems, runtimes, or compilers</li>\n<li>Performance modeling, benchmarking, and systems analysis</li>\n<li>Hardware–software co-design for AI workloads</li>\n<li>Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development</li>\n</ul>\n<p>Proven ability to work across organizational boundaries and influence technical decisions involving multiple stakeholders. Experience designing or operating large-scale AI clusters for training or inference. Deep familiarity with LLMs, multimodal models, or recommendation systems, and their systems-level implications. Experience with accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand). Background in performance modeling and capacity planning for future hardware generations. 
Prior experience contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews. Publications, patents, or open-source contributions in systems, architecture, or ML systems are a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd1a0d16-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-co-design-ai-hpc-systems-mai-superintelligence-team-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","AI accelerator or GPU architectures","Distributed systems and large-scale AI training/inference","High-performance computing (HPC) and collective communications","ML systems, runtimes, or compilers","Performance modeling, benchmarking, and systems analysis","Hardware–software co-design for AI workloads","Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development"],"x-skills-preferred":["LLMs, multimodal models, or recommendation systems, and their systems-level implications","Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand)","Performance modeling and capacity planning for future hardware generations","Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews","Publications, patents, or open-source contributions in systems, architecture, or ML 
systems"],"datePosted":"2026-03-08T22:13:30.666Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, AI accelerator or GPU architectures, Distributed systems and large-scale AI training/inference, High-performance computing (HPC) and collective communications, ML systems, runtimes, or compilers, Performance modeling, benchmarking, and systems analysis, Hardware–software co-design for AI workloads, Proficiency in systems-level programming (e.g., C/C++, CUDA, Python) and performance-critical software development, LLMs, multimodal models, or recommendation systems, and their systems-level implications, Accelerator interconnects and communication stacks (e.g., NCCL, MPI, RDMA, high-speed Ethernet or InfiniBand), Performance modeling and capacity planning for future hardware generations, Contributing to or leading hardware roadmaps, silicon bring-up, or platform architecture reviews, Publications, patents, or open-source contributions in systems, architecture, or ML systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_139cd1f4-231"},"title":"Software Engineer, Compute Efficiency","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p>At Anthropic, we are building some of the most complex and large-scale AI infrastructure in the world. As that infrastructure scales rapidly, so does the imperative to optimise how we use it. 
As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable—without compromising reliability or latency.</p>\n<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimisation frameworks that ensure every dollar of our infrastructure investment delivers maximum value. This is a high-impact, cross-functional role at the intersection of systems engineering, financial optimisation, and AI infrastructure.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilisation, and costs across our cloud and datacentre fleets.</li>\n</ul>\n<ul>\n<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimise their resource consumption.</li>\n</ul>\n<ul>\n<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>\n</ul>\n<ul>\n<li>Partner closely with cloud service providers and internal stakeholders to optimise cluster configurations, workload placement, and resource utilisation across AI training and inference workloads—including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>\n</ul>\n<ul>\n<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>\n</ul>\n<ul>\n<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance 
with cost efficiency.</li>\n</ul>\n<ul>\n<li>Drive architectural improvements and code-level optimisations across multiple services and platforms to deliver measurable utilisation and performance gains.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>\n</ul>\n<ul>\n<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>\n</ul>\n<ul>\n<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>\n</ul>\n<ul>\n<li>Have experience optimising end-to-end performance of distributed systems, including workload right-sizing and resource utilisation tuning.</li>\n</ul>\n<ul>\n<li>Possess a deep curiosity for how things work under the hood and a proven ability to work independently to solve opaque performance issues</li>\n</ul>\n<ul>\n<li>Have experience designing or working with performance and utilisation monitoring tools in large-scale, distributed environments.</li>\n</ul>\n<ul>\n<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>\n</ul>\n<ul>\n<li>Have excellent communication and collaboration skills—you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>\n</ul>\n<ul>\n<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>\n</ul>\n<ul>\n<li>The ability to quickly understand systems design tradeoffs while keeping track of 
rapidly evolving software systems</li>\n</ul>\n<ul>\n<li>Published work in performance optimisation and scaling distributed systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. 
We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_139cd1f4-231","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108982008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $405,000USD","x-skills-required":["distributed systems","cloud infrastructure","Kubernetes","Infrastructure as Code","AWS","GCP","Python","Rust","Go","Java","performance optimisation","scalability","continuous improvement"],"x-skills-preferred":["machine learning infrastructure workloads","NCCL","linux kernel tuning","eBPF","systems design tradeoffs","published work in performance optimisation"],"datePosted":"2026-03-08T13:56:57.417Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, performance optimisation, scalability, continuous improvement, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, systems design tradeoffs, published work in performance 
optimisation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3cc256d7-b0a"},"title":"Transaction Manager","description":"<p>As a Transaction Manager at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our data centre capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>\n<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems, requiring you to bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help identify data centre capacity opportunities and options through management of network relationships across data centre developer, broker, and power contacts.</li>\n<li>Lead the RFP and commercial sourcing process for specific data centre deals, managing developer outreach, proposal evaluation, and competitive selection processes</li>\n<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>\n<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>\n<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organisation, platform engineers, and finance organisation to ensure alignment prior to lease execution</li>\n<li>Act as the single point of contact (SPOC) for auxiliary organisations including 
networks, deployments, and government relations, providing regular updates on transaction progress and leasing process status</li>\n<li>Develop and maintain transaction timelines, tracking critical path items and proactively identifying risks that could impact deal closure</li>\n<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint</li>\n<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in transaction management, commercial real estate, data centre leasing, or infrastructure procurement</li>\n<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>\n<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>\n<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>\n<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>\n<li>Demonstrate exceptional communication skills, able to serve as an effective liaison between internal stakeholders and external partners</li>\n<li>Are highly organised with strong attention to detail while maintaining focus on strategic deal objectives</li>\n<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>\n<li>Have a collaborative mindset and can build trust with diverse stakeholder groups across the organisation</li>\n</ul>\n<p>It&#39;s a bonus if you:</p>\n<ul>\n<li>Have experience with data centre or hyperscale infrastructure transactions specifically</li>\n<li>Understand technical requirements for AI/ML workloads including 
power density, cooling, and network connectivity</li>\n<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>\n<li>Possess familiarity with data centre developer ecosystems and market dynamics</li>\n<li>Have experience in high-growth technology companies managing infrastructure expansion</li>\n<li>Understand utility coordination, power procurement, or energy considerations in data centre transactions</li>\n<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>\n</ul>\n<p>The annual compensation range for this role is $365,000 - $435,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3cc256d7-b0a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5099080008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000 - $435,000 USD","x-skills-required":["transaction management","commercial real estate","data centre leasing","infrastructure procurement","RFP processes","competitive sourcing","project management","negotiation skills","communication skills","collaboration","attention to detail"],"x-skills-preferred":["data centre or hyperscale infrastructure transactions","AI/ML workloads","legal teams","data centre developer ecosystems","high-growth technology companies","utility coordination","power procurement","energy considerations","corporate development","strategic partnerships","infrastructure investment"],"datePosted":"2026-03-08T13:56:15.962Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York City, 
NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"transaction management, commercial real estate, data centre leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, communication skills, collaboration, attention to detail, data centre or hyperscale infrastructure transactions, AI/ML workloads, legal teams, data centre developer ecosystems, high-growth technology companies, utility coordination, power procurement, energy considerations, corporate development, strategic partnerships, infrastructure investment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":435000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ef0c826-856"},"title":"Engineering Manager, Safeguards Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Anthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly — and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organisation access that data safely and ergonomically.</p>\n<p>As Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. 
This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements — and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities</li>\n<li>Own the strategy and execution for porting the safeguards offline data stack — including PII storage and tooling — across new cloud and deployment environments as Anthropic expands</li>\n<li>Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints</li>\n<li>Drive tooling and architecture decisions that maximise data retention within the bounds of our privacy and compliance requirements</li>\n<li>Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g. 
HIPAA, EU privacy regulations)</li>\n<li>Collaborate closely with enterprise customers and product teams on zero data retention offerings, balancing safety needs with robust enterprise data contracts</li>\n<li>Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination</li>\n<li>Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals</li>\n<li>Partner with recruiting to attract, hire, and retain strong engineering talent</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 4+ years of front-line engineering management experience</li>\n<li>Have a track record of leading teams that build and operate data infrastructure at scale</li>\n<li>Have hands-on software engineering experience as an individual contributor prior to moving into management</li>\n<li>Have a strong understanding of data privacy principles, PII handling, and compliance frameworks</li>\n<li>Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities</li>\n<li>Have experience working cross-functionally across infrastructure, product, and compliance or security teams</li>\n<li>Are clear and persuasive communicators, both in writing and in person</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience with multi-cloud or multi-region data portability, particularly in regulated environments</li>\n<li>Have built privacy-preserving data pipelines or interfaces for ML workloads</li>\n<li>Have experience with enterprise data contracts or zero data retention architectures</li>\n<li>Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data</li>\n<li>Have a passion for building diverse and inclusive teams</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education 
requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for</p>","url":"https://yubhub.co/jobs/job_5ef0c826-856","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5103078008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD\n£325,000 - £390,000 GBP","x-skills-required":["data infrastructure","data privacy","compliance frameworks","software engineering","team management","cross-functional collaboration","communication","data portability","multi-cloud","multi-region","regulated environments","privacy-preserving data pipelines","ML workloads","enterprise data contracts","zero data retention architectures"],"x-skills-preferred":["in-memory storage","compute for sensitive data","novel approaches to data processing","diverse and inclusive teams"],"datePosted":"2026-03-08T13:42:50.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK; New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, data privacy, compliance frameworks, software engineering, team management, cross-functional collaboration, communication, data portability, multi-cloud, multi-region, regulated environments, privacy-preserving data pipelines, ML workloads, enterprise data contracts, zero data retention architectures, in-memory storage, compute for sensitive data, novel approaches to data processing, diverse and inclusive 
teams","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_325c968b-d59"},"title":"Inference Technical Lead, Sora","description":"<p><strong>Inference Technical Lead, Sora</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Research</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as 
required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a GPU Inference Engineer to contribute to improvements in model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. 
You’ll also be engaged in model design, helping our researchers develop inference-friendly models.</p>\n<p><strong>This role is critical to scaling the team’s broader goals - it will directly enable leadership to focus on higher-leverage initiatives by building a stronger technical foundation.</strong></p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Perform engineering efforts focused on improving model serving, inference performance, and system efficiency</li>\n</ul>\n<ul>\n<li>Drive optimizations from a kernel and data movement perspective to improve system throughput and reliability</li>\n</ul>\n<ul>\n<li>Partner closely with research and product teams to ensure our models perform effectively at scale</li>\n</ul>\n<ul>\n<li>Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Have deep expertise in model performance optimization, particularly at the inference layer</li>\n</ul>\n<ul>\n<li>Have a strong background in kernel-level systems, data movement, and low-level performance tuning</li>\n</ul>\n<ul>\n<li>Are excited about scaling high-performing AI systems that serve real-world, multimodal workloads</li>\n</ul>\n<ul>\n<li>Can navigate ambiguity, set technical direction, and drive complex initiatives to completion</li>\n</ul>\n<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong></p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_325c968b-d59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3c2d1178-777f-4613-a084-75a3d37cd1af","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$380K • Offers Equity","x-skills-required":["GPU Inference Engineer","Model Performance Optimization","Kernel-Level Systems","Data Movement","Low-Level Performance Tuning"],"x-skills-preferred":["AI Systems","Multimodal Workloads","Complex Initiatives","Technical Direction"],"datePosted":"2026-03-06T18:42:26.117Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU Inference Engineer, Model Performance Optimization, Kernel-Level Systems, Data Movement, Low-Level Performance Tuning, AI Systems, Multimodal Workloads, Complex Initiatives, Technical Direction","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":380000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d3a39f4c-d95"},"title":"Software Engineer, Inference - Multi Modal","description":"<p><strong>Software Engineer, Inference - Multi Modal</strong></p>\n<p><strong>Location</strong></p>\n<p>San 
Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $555K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and 
wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Inference team powers the deployment of our most advanced models - including our GPT models, 4o Image Generation, and Whisper - across a variety of platforms. Our work ensures these models are available, performant, and scalable in production, and we partner closely with Research to bring the next generation of models into the world. We&#39;re a small, fast-moving team of engineers focused on delivering a world-class developer experience while pushing the boundaries of what AI can do.</p>\n<p>We’re expanding into multimodal inference, building the infrastructure needed to serve models that handle image, audio, and other non-text modalities. These workloads are inherently more heterogeneous and experimental, involving diverse model sizes and interactions, more complex input/output formats, and tighter coordination with product and research.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a software engineer to help us serve OpenAI’s multimodal models at scale. You’ll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other MM workloads in production.</p>\n<p>This work is inherently cross-functional: you’ll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. 
You&#39;ll build and optimize the systems that let users generate speech, understand images, and interact with models in ways far beyond text.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and implement inference infrastructure for large-scale multimodal models.</li>\n</ul>\n<ul>\n<li>Optimize systems for high-throughput, low-latency delivery of image and audio inputs and outputs.</li>\n</ul>\n<ul>\n<li>Enable experimental research workflows to transition into reliable production services.</li>\n</ul>\n<ul>\n<li>Collaborate closely with researchers, infra teams, and product engineers to deploy state-of-the-art capabilities.</li>\n</ul>\n<ul>\n<li>Contribute to system-level improvements including GPU utilization, tensor parallelism, and hardware abstraction layers.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience building and scaling inference systems for LLMs or multimodal models.</li>\n</ul>\n<ul>\n<li>Have worked with GPU-based ML workloads and understand the performance dynamics of large models, especially with complex data like images or audio.</li>\n</ul>\n<ul>\n<li>Enjoy experimental, fast-evolving work and collaborating closely with research.</li>\n</ul>\n<ul>\n<li>Are comfortable dealing with systems that span networking, distributed compute, and high-throughput data handling.</li>\n</ul>\n<ul>\n<li>Have familiarity with inference tooling like vLLM, TensorRT-LLM, or custom model parallel systems.</li>\n</ul>\n<ul>\n<li>Own problems end-to-end and are excited to operate in ambiguous, fast-moving spaces.</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Experience working with image generation or audio synthesis models in production.</li>\n</ul>\n<ul>\n<li>Exposure to distributed ML training or system-efficient model design.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose 
artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_d3a39f4c-d95","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/4d14449e-5e7f-45d4-b103-8776a6c87086","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$295K – $555K • Offers Equity","x-skills-required":["Software Engineer","Inference Infrastructure","GPU-based ML Workloads","Tensor Parallelism","Hardware Abstraction Layers","vLLM","TensorRT-LLM","Custom Model Parallel Systems"],"x-skills-preferred":["Image Generation","Audio Synthesis","Distributed ML Training","System-Efficient Model Design"],"datePosted":"2026-03-06T18:31:07.882Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software Engineer, Inference Infrastructure, GPU-based ML Workloads, Tensor Parallelism, Hardware Abstraction Layers, vLLM, TensorRT-LLM, Custom Model Parallel Systems, Image Generation, Audio Synthesis, Distributed ML Training, System-Efficient Model 
Design","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9f9ededf-ecb"},"title":"Software Engineer, Frontier Clusters Infrastructure","description":"<p><strong>Software Engineer, Frontier Clusters Infrastructure</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Frontier Systems team at OpenAI builds, launches, and supports the largest supercomputers in the world that OpenAI uses for its most cutting-edge model training.</p>\n<p>We take data center designs, turn them into real, working systems and build any software needed for running large-scale frontier model trainings.</p>\n<p>Our mission is to bring up, stabilize and keep these hyperscale supercomputers reliable and efficient during the training of the frontier models.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for engineers to operate the next generation of compute clusters that power OpenAI’s frontier research.</p>\n<p>This role blends distributed systems engineering with hands-on infrastructure work on our largest datacenters. You will scale Kubernetes clusters to massive size, automate bare-metal bring-up, and build the software layer that hides the complexity of a vast number of nodes across multiple data centers.</p>\n<p>You will work at the intersection of hardware and software, where speed and reliability are critical. 
Expect to manage fast-moving operations, quickly diagnose and fix issues when things are on fire, and continuously raise the bar for automation and uptime.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management</li>\n</ul>\n<ul>\n<li>Build software abstractions that unify multiple clusters and present a seamless interface to training workloads</li>\n</ul>\n<ul>\n<li>Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale</li>\n</ul>\n<ul>\n<li>Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles</li>\n</ul>\n<ul>\n<li>Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure</li>\n</ul>\n<ul>\n<li>Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments</li>\n</ul>\n<ul>\n<li>Bring strong programming or scripting skills (Python, Go, or similar) and familiarity with Infrastructure-as-Code tools such as Terraform or CloudFormation</li>\n</ul>\n<ul>\n<li>Are comfortable with bare-metal Linux environments, GPU hardware, and large-scale networking</li>\n</ul>\n<ul>\n<li>Enjoy solving fast-moving, high-impact operational problems and building automation to eliminate manual work</li>\n</ul>\n<ul>\n<li>Can balance careful engineering with the urgency of keeping mission-critical systems running</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Experience as an infrastructure, systems, or distributed systems 
engineer in large-scale or high-availability environments</li>\n</ul>\n<ul>\n<li>Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads</li>\n</ul>\n<ul>\n<li>Proficiency in cloud infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations</li>\n</ul>\n<p>Bonus: background with GPU workloads, firmware management, or high-performance computing</p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_9f9ededf-ecb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/770d5c3f-4e72-4b49-aec4-d444e8ad7a64","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $490K • Offers Equity","x-skills-required":["Kubernetes","Python","Go","Terraform","CloudFormation","Linux","GPU hardware","Large-scale networking"],"x-skills-preferred":["Infrastructure-as-Code","Cloud infrastructure concepts","Containerized workloads","Distributed systems engineering"],"datePosted":"2026-03-06T18:30:45.275Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Python, Go, Terraform, CloudFormation, Linux, GPU hardware, Large-scale networking, Infrastructure-as-Code, Cloud infrastructure concepts, Containerized workloads, Distributed systems engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_43ec3483-7d3"},"title":"Software Engineer, Fleet Infrastructure","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Software Engineer, Fleet Infrastructure</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>Job Description</strong></p>\n<p>This role will support the fleet infrastructure team at OpenAI. 
The fleet team focuses on running the world’s largest, most reliable, and frictionless GPU fleet to support OpenAI’s general purpose model training and deployment. Work on this team ranges from</p>\n<ul>\n<li>Maximizing GPUs doing useful work by building user-friendly scheduling and quota systems</li>\n</ul>\n<ul>\n<li>Running a reliable and low-maintenance platform by building push-button automation for Kubernetes cluster provisioning and upgrades</li>\n</ul>\n<ul>\n<li>Supporting research workflows with service frameworks and deployment systems</li>\n</ul>\n<ul>\n<li>Ensuring fast model startup times through high-performance snapshot delivery across blob storage down to hardware caching</li>\n</ul>\n<ul>\n<li>Much more!</li>\n</ul>\n<p><strong>About the Role</strong></p>\n<p>As an engineer within Fleet infrastructure, you will design, write, deploy, and operate infrastructure systems for model deployment and training on one of the world’s largest GPU fleets. The scale is immense, the timelines are tight, and the organization is moving fast; this is an opportunity to shape a critical system in support of OpenAI&#39;s mission to advance AI capabilities responsibly.</p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, implement, and operate components of our compute fleet including job scheduling, cluster management, snapshot delivery, and CI/CD systems.</li>\n</ul>\n<ul>\n<li>Interface with researchers and product teams to understand workload requirements</li>\n</ul>\n<ul>\n<li>Collaborate with hardware, infrastructure, and business teams to provide a high-utilization, high-reliability service</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience with hyperscale compute systems</li>\n</ul>\n<ul>\n<li>Possess strong programming skills</li>\n</ul>\n<ul>\n<li>Have experience working in public clouds (especially Azure)</li>\n</ul>\n<ul>\n<li>Have experience working in Kubernetes</li>\n</ul>\n<ul>\n<li>Have an execution-focused mentality paired with a rigorous focus on user requirements</li>\n</ul>\n<ul>\n<li>As a bonus, have an understanding of AI/ML workloads</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_43ec3483-7d3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/a58add97-1968-4d5c-b504-ab62bea12df3","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $490K • Offers Equity","x-skills-required":["hyperscale compute systems","programming skills","public clouds (especially Azure)","Kubernetes","execution focused mentality","AI/ML workloads"],"x-skills-preferred":["hyperscale compute systems","programming skills","public clouds (especially Azure)","Kubernetes","execution focused mentality","AI/ML workloads"],"datePosted":"2026-03-06T18:28:45.114Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"hyperscale compute systems, programming skills, public clouds (especially Azure), Kubernetes, execution focused mentality, AI/ML workloads, hyperscale compute systems, programming skills, public clouds (especially Azure), Kubernetes, execution focused mentality, AI/ML workloads","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7c0b682d-d0b"},"title":"Senior Software 
Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Senior Software Engineer at their Beijing office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking an expert Senior GPU Engineer to join our AI Infrastructure team. In this role, you will architect and optimize the core inference engine that powers our large-scale AI models. You will be responsible for pushing the boundaries of hardware performance, reducing latency, and maximizing throughput for Generative AI and Deep Learning workloads. You will work at the intersection of Deep Learning algorithms and low-level hardware, designing custom operators and building a highly efficient training/inference execution engine from the ground up.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Custom Operator Development: Design and implement highly optimized GPU kernels (CUDA/Triton) for critical deep learning operations (e.g., FlashAttention, GEMM, LayerNorm) to outperform standard libraries.</li>\n<li>Inference Engine Architecture: Contribute to the development of our high-performance inference engine, focusing on graph optimizations, operator fusion, and dynamic memory management (e.g., KV Cache optimization).</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Expertise in the CUDA programming model and NVIDIA GPU architectures (specifically Ampere/Hopper).</li>\n<li>Deep understanding of the memory hierarchy (Shared Memory, L2 
cache, Registers), warp-level primitives, occupancy optimization, and bank conflict resolution.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proven ability to navigate and modify complex, large-scale codebases (e.g., PyTorch internals, Linux kernel).</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Starting January 26, 2026, Microsoft AI employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7c0b682d-d0b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-17/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","CUDA","Triton","PyTorch","Linux"],"x-skills-preferred":["CMake","pybind11","CI/CD","GPU workloads"],"datePosted":"2026-03-06T07:25:46.472Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Beijing"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, CUDA, Triton, PyTorch, Linux, CMake, pybind11, CI/CD, GPU workloads"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b3a14ec-b2e"},"title":"Member of Technical Staff - Engineering Manager, Copilot Memory and Personalization","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Engineering Manager, Copilot Memory and Personalization at their Mountain 
View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Engineering Manager, you will build and lead a team of backend and machine learning engineers, including driving project planning, prioritization of work, and designing features. You will guide teams and lead the identification of dependencies and the development of design documents for a product, application, service, or platform. You will make hands-on contributions to the codebase and infrastructure. You will guide architecture and design efforts by leading discussions, creating proposals and design documents, and ensuring solutions meet business, security, and compliance requirements. You will ship AI-powered experiences that will shape how millions of people will interact with AI in the future. 
You will drive implementation of features and systems, breaking down long-term goals into clear milestones, aligning with release plans, and ensuring cross-team coordination.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Build and lead a team of backend and machine learning engineers, including driving project planning, prioritization of work, and designing features.</li>\n<li>Guide teams and lead the identification of dependencies and the development of design documents for a product, application, service, or platform.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proven experience building large-scale distributed systems and optimizing workloads for efficiency and scalability.</li>\n<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in AI.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong leadership and management skills.</li>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b3a14ec-b2e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft 
AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-engineering-manager-copilot-memory-and-personalization/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","large-scale distributed systems","optimizing workloads","AI technologies"],"x-skills-preferred":["machine learning","backend engineering","leadership","management","communication","collaboration"],"datePosted":"2026-03-05T19:50:27.189Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, large-scale distributed systems, optimizing workloads, AI technologies, machine learning, backend engineering, leadership, management, communication, collaboration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a507bee2-c87"},"title":"Quality Inspector","description":"<p>We&#39;re now looking to expand our Quality Control Department with the addition of a meticulous and dedicated Quality Inspector. 
This is a key role in ensuring that the components and assemblies we produce and procure meet the exacting standards demanded by world-class competition.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>Perform dimensional and visual inspections of mechanical components.</p>\n<ul>\n<li>Interpret and work from detailed engineering drawings, CAD data, and technical documentation.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Understanding of high-performance manufacturing standards and tolerances.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a507bee2-c87","directApply":true,"hiringOrganization":{"@type":"Organization","name":"M-Sport","sameAs":"https://www.m-sport.co.uk","logo":"https://logos.yubhub.co/m-sport.co.uk.png"},"x-apply-url":"https://www.m-sport.co.uk/quality-inspector-wqc251215","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Understanding of high-performance manufacturing standards and tolerances","Hands-on experience with CMM (ideally both manual and programmable) and conventional inspection equipment","Able to read and interpret engineering drawings and specifications","Highly organised with meticulous attention to detail","Computer literate with strong MS Office skills","Skilled in multitasking and prioritising workloads under pressure","Excellent communication skills, and a keen eye for problem solving"],"x-skills-preferred":["Previous experience in a quality inspection or precision engineering role, ideally within motorsport or automotive"],"datePosted":"2025-12-20T09:15:37.198Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brackley"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Understanding of high-performance manufacturing 
standards and tolerances, Hands-on experience with CMM (ideally both manual and programmable) and conventional inspection equipment, Able to read and interpret engineering drawings and specifications, Highly organised with meticulous attention to detail, Computer literate with strong MS Office skills, Skilled in multitasking and prioritising workloads under pressure, Excellent communication skills, and a keen eye for problem solving, Previous experience in a quality inspection or precision engineering role, ideally within motorsport or automotive"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_317e2da0-af4"},"title":"Machine Shop Supervisor","description":"<p>Our in-house Machine Shop is a critical part of our success – delivering high-performance components for our competition cars and development projects. We’re now looking for a Machine Shop Supervisor to lead and develop this essential function.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As the Machine Shop Supervisor, you will lead and manage our team of skilled Machinists to ensure efficient, high-quality production in a safe working environment. 
Responsible for overseeing the day-to-day operations, scheduling workloads, maintaining equipment, and ensuring compliance with safety and quality standards.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong interpretation of engineering drawings.</li>\n<li>Skilled in multitasking and prioritising workloads under pressure.</li>\n<li>Excellent communication skills, and a keen eye for problem solving.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_317e2da0-af4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"M-Sport","sameAs":"https://www.m-sport.co.uk","logo":"https://logos.yubhub.co/m-sport.co.uk.png"},"x-apply-url":"https://www.m-sport.co.uk/machine-shop-supervisor-wmch251212","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Strong interpretation of engineering drawings","Skilled in multitasking and prioritising workloads under pressure","Excellent communication skills"],"x-skills-preferred":["Two years proven leadership experience in a machine shop environment","Confident in programming HyperMill and Heidenhain controls"],"datePosted":"2025-12-20T09:14:49.777Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brackley"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Strong interpretation of engineering drawings, Skilled in multitasking and prioritising workloads under pressure, Excellent communication skills, Two years proven leadership experience in a machine shop environment, Confident in programming HyperMill and Heidenhain controls"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_48a2e456-8b9"},"title":"Rally Technician","description":"<p>We&#39;re 
looking for a self-motivated team player with a positive and enthusiastic attitude. The ideal candidate will be educated to technician level and have a thorough understanding of motor vehicles and their systems. Highly organised with meticulous attention to detail. Skilled in multitasking and prioritising workloads under pressure. Excellent communication skills, and a keen eye for problem solving. Ability to work both independently and collaboratively as part of a multi-disciplinary team. Willingness to travel globally and work across events and test programmes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_48a2e456-8b9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"M-Sport UK","sameAs":"https://www.m-sport.co.uk","logo":"https://logos.yubhub.co/m-sport.co.uk.png"},"x-apply-url":"https://www.m-sport.co.uk/rally-technicians-wwor251215","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["previous experience in a motorsport environment","thorough understanding of motor vehicles and their systems","highly organised with meticulous attention to detail","skilled in multitasking and prioritising workloads under pressure","excellent communication skills"],"x-skills-preferred":["previous experience in a rally, rallycross, or racing environment"],"datePosted":"2025-12-20T09:14:17.585Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brackley"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"previous experience in a motorsport environment, thorough understanding of motor vehicles and their systems, highly organised with meticulous attention to detail, skilled in multitasking and prioritising workloads under pressure, excellent communication skills, previous 
experience in a rally, rallycross, or racing environment"}]}