<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>07c95966-8e7</externalid>
      <Title>Backend Developer - Host Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Backend Developer and become part of the team that brings new vacation rental properties to life on Holidu.</p>
<p>You&#39;ll be working at the heart of our property acquisition engine, where we take hosts from their very first sign-up all the way to their first booking, making that journey as fast and seamless as possible.</p>
<p>This team sits at a uniquely strategic intersection of product and growth. You will build and optimize the systems that every new host flows through: from onboarding and listing creation, to property configuration, content quality, and referral programs.</p>
<p>The work demands reliability and attention to detail, because the time between a host signing up and welcoming their first guest, and how well their property performs from day one, are directly shaped by the quality of what you build.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS/ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and contribute to shaping the team&#39;s direction as you grow.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Early experience delivering product impact through engineering: you&#39;ve shipped things that real users depend on.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, connecting hosts with millions of guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2589679</Applyto>
      <Location>Munich, Germany</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64bb6566-575</externalid>
      <Title>Senior ‘Developer Infrastructure’ Engineer</Title>
      <Description><![CDATA[<p>The GALAXY Platform Execution &amp; Exchange Data (SPEED) Team is a core part of Millennium&#39;s technology organisation, powering the firm&#39;s lowest-latency solutions for systematic and high-frequency trading.</p>
<p>SPEED delivers the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>
<p>As a Senior Developer Infrastructure Engineer on SPEED, you will own and evolve the build and CI/CD infrastructure that underpins these mission-critical systems.</p>
<p>By designing scalable build pipelines, shared tooling, and reliable release workflows, you will directly enhance developer productivity and enable fast, safe iteration on some of the firm&#39;s most performance-sensitive code.</p>
<p>This role offers the opportunity to shape core engineering practices while contributing to platforms that are central to Millennium&#39;s trading edge.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain a highly scalable, parallel, and cached build system for a large, performance-sensitive codebase.</li>
<li>Own and continually optimise CI/CD pipelines to minimise build/test times, reduce flakiness, and improve developer productivity.</li>
<li>Operate with an AI-first mindset across the SDLC, using automation by default to streamline build, test, and release workflows.</li>
<li>Integrate and operationalise AI tools (e.g., copilots, workflow automation, AI-driven analytics) to eliminate manual toil, accelerate development, and codify reusable AI-enabled patterns for the broader engineering organisation.</li>
<li>Design and operate containerised environments (e.g., Docker, Kubernetes) to maximise utilisation, reliability, and scalability across environments.</li>
<li>Implement and manage artifact storage, dependency management, and versioning strategies for large, distributed systems.</li>
<li>Develop and maintain shared libraries, CLIs, scripts, and internal platforms that reduce friction and enable self-service for engineers.</li>
<li>Build and enhance test suites and environment provisioning, leveraging AI and automation where appropriate for smarter checks, triage, and observability.</li>
<li>Monitor, instrument, and improve the reliability, observability, and performance of build and CI/CD systems using metrics, dashboards, and alerting.</li>
<li>Partner with trading and engineering teams to understand requirements, remove friction, and champion best practices for building, testing, and releasing software.</li>
</ul>
<p>Qualifications/Skills Required</p>
<ul>
<li>5+ years of software engineering or DevInfra/Platform/DevOps experience, with significant focus on build systems and CI/CD.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Rust, Go, C++) for automation and tooling.</li>
<li>Hands-on experience with at least one modern build system (e.g., Bazel, Buck2).</li>
<li>Solid understanding of source control (Git), branching strategies, and release management.</li>
<li>Experience with monorepos is a plus.</li>
<li>Experience scaling build and test infrastructure for growing codebases and teams (parallelization, test sharding, remote execution, caching).</li>
<li>Experience designing or participating in processes, systems, or playbooks that leverage AI to streamline work rather than adding more headcount to the team.</li>
<li>Familiarity with containers and cloud infrastructure (Docker, Kubernetes, and major cloud providers such as AWS/GCP/Azure).</li>
<li>Strong communication and collaboration skills; comfortable partnering with multiple teams and driving cross-cutting initiatives.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Rust, Go, C++, Bazel, Buck2, Git, Kubernetes, Docker, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a company that provides equities, quant strategies, and shared services technology.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954695574</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>107cbb3f-b6c</externalid>
      <Title>Production Support Engineer</Title>
      <Description><![CDATA[<p>The Production Support Engineer role is a hands-on, business-facing position that requires understanding how applications support the business, investigating functional and data-related issues, and communicating clearly with users under pressure.</p>
<p>The Core Technology Production Support team supports a suite of business-critical financial applications used by Middle Office, Operations, Treasury, and Trading. These platforms are central to the firm&#39;s PnL, risk, cash, trade processing, and regulatory reporting workflows.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>End to end ownership of the production environment</li>
<li>Infrastructure management</li>
<li>Release planning and deployment</li>
<li>Incident and problem management, including root cause analysis</li>
<li>Capacity Planning / BCP Testing</li>
<li>Build strong relationships with development and end-users/clients</li>
<li>Foster the DevOps culture</li>
<li>Focus on client service and delivery</li>
<li>Become the go-to person for your area of responsibility</li>
<li>Build subject matter expertise</li>
<li>Create and maintain high quality documentation and runbooks</li>
<li>Cross train other Support team members</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Electrical Engineering, or a related field.</li>
<li>2+ years’ experience supporting an enterprise environment</li>
<li>Must have previous experience supporting business facing applications</li>
<li>Strong scripting skills in one of the following: Python (preferred), PowerShell, Perl, etc.</li>
<li>Excellent SQL skills and knowledge of various database systems</li>
<li>Must be able to run and understand complex queries</li>
<li>Ability to support both Windows and Unix/Linux environments</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience working in a trading environment</li>
<li>Exposure to the following:
<ul>
<li>CI/CD (Jenkins/Octopus/Artifactory)</li>
<li>Metrics/KPIs (Datadog/Influx/Tableau)</li>
<li>Kafka</li>
<li>Kubernetes</li>
<li>AI (MCP/Agents)</li>
</ul>
</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, PowerShell, Perl, SQL, Windows, Unix/Linux, CI/CD, Metrics/KPIs, Kafka, Kubernetes, AI</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides financial applications used by Middle Office, Operations, Treasury, and Trading.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>100000</Compensationmin>
      <Compensationmax>175000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755943534669</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7e58f60-5fa</externalid>
      <Title>Software Engineer - Learning Engineering and Data (LEaD) Program</Title>
      <Description><![CDATA[<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>
<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>
<p>Candidate is expected to:</p>
<ul>
<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>
<li>Take part in the development and enhancement of the backend distributed system</li>
<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>2-5 years of experience working with C++, Python, or Java</li>
<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>
<li>Must be comfortable working in both Unix/Linux and Windows environments</li>
<li>Good understanding of various design patterns</li>
<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>
<li>Solid communication skills</li>
<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>
<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>
</ul>
<p>Desirable Skills/Knowledge:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>
<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>
<li>Hands-on experience building ML and data pipeline architectures</li>
<li>Understanding of distributed messaging systems</li>
<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>
<li>Experience with relational and non-relational database platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Docker, Kubernetes, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>IT LEaD Program</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a large global alternative investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953879362</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or any other scientific field</li>
<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in the process of containerization for applications and their subsequent orchestration within Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TeamCity, Jenkins, Octopus Deploy, ArgoCD, AWS, Terraform, CloudFormation, Python, Flask, FastAPI, Django, Docker, Kubernetes, Datadog, ELK, Splunk, Loki, Grafana, Unix/Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f7aeee90-9b7</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>Join HSBC and stand out in your career. We offer opportunities, support and rewards that will take you further. As an Associate Director, Software Engineering, you will lead the development and implementation of Microservices-based solutions using Java.</p>
<p>In this role, you will:</p>
<ul>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should have:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes platform, GCP and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain experience and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
<li>Certifications in Java or cloud technologies are a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices architecture, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662228</Applyto>
      <Location>Hyderabad, Telangana, India · Bangalore, Karnataka, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aee9464f-897</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>We are currently seeking an experienced professional to join our team in the role of an Associate Director, Software Engineering.</p>
<p>In this role, you will lead the development and implementation of Microservices-based solutions using Java. You will also:</p>
<ul>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should meet the following requirements:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes platform, GCP and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain experience and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
<li>Certifications in Java or cloud technologies are a plus.</li>
</ul>
<p>You&#39;ll achieve more when you join HSBC.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662222</Applyto>
      <Location>Bangalore, Karnataka, India · Hyderabad, Telangana, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a277a7cc-202</externalid>
      <Title>Staff Frontend Developer - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p><strong>Our Current Itinerary</strong></p>
<p>Are you ready to shape the future of travel tech at scale? We are seeking an exceptional Staff Frontend Developer to drive technical excellence across our entire booking funnel.</p>
<p>We&#39;re among the leading travel tech companies worldwide, growing substantially and sustainably year after year, with a mission to make vacation home booking and hosting decisions stress-free and packed with joy.</p>
<p>Our vibrant team of over 600 talented individuals from 60+ countries shares a passion for cutting-edge technology, constant improvement, and creating exceptional experiences for our 50,000 hosts and 100 million website users each year.</p>
<p><strong>Your Future Team</strong></p>
<p>As a Staff Frontend Engineer, you&#39;ll be the technical authority across all teams in the booking funnel, from the Discovery team&#39;s list pages all the way through the checkout funnel to the Post Booking experience.</p>
<p>You&#39;ll design and implement overarching frontend architecture that scales to handle millions of users, while establishing best practices that elevate the entire engineering department.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Core Technologies: TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR.</li>
<li>Data Infrastructure: DynamoDB, Redis.</li>
<li>Cloud &amp; DevOps: AWS, Kubernetes, Docker, Jenkins, Git.</li>
<li>Monitoring &amp; Analytics: Sentry, ELK, Grafana, Looker, OpsGenie, and internally developed technologies.</li>
</ul>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>Define the technical vision and strategy for the frontend engineers of the GX department, aligning with organizational goals and anticipating industry trends.</li>
<li>Architect scalable, high-availability frontend systems serving 1M+ daily users across the entire booking funnel.</li>
<li>Lead the design and implementation of department-wide technical initiatives that impact conversion rates, customer satisfaction, and technical excellence.</li>
</ul>
<p><strong>Cross-Team Collaboration &amp; Influence</strong></p>
<ul>
<li>Partner with Engineering Managers and Department Leaders to shape the technical roadmap.</li>
<li>Contribute to specifications for large-scale projects, organizing parallel workstreams that reassemble into cohesive launches.</li>
</ul>
<p><strong>Technical Excellence &amp; Innovation</strong></p>
<ul>
<li>Establish, iterate on, and enforce engineering best practices (testing, documentation, architecture) department-wide.</li>
<li>Review code and set quality standards that become the gold standard across teams.</li>
</ul>
<p><strong>Mentorship &amp; Knowledge Leadership</strong></p>
<ul>
<li>Mentor senior developers, helping them grow into technical leaders.</li>
<li>Lead department-wide knowledge sharing initiatives and technical workshops.</li>
</ul>
<p><strong>Your Backpack is Filled with</strong></p>
<ul>
<li>8+ years of frontend development experience with deep expertise in JavaScript (ES6+), TypeScript, and ReactJS.</li>
<li>Proven track record of architecting large-scale frontend applications handling millions of users.</li>
<li>Expert-level proficiency with state management, performance optimization, and modern build tools.</li>
</ul>
<p><strong>Leadership &amp; Strategic Thinking</strong></p>
<ul>
<li>Demonstrated ability to define and execute technical strategies at department or company level.</li>
<li>Experience leading cross-functional initiatives and influencing without direct authority.</li>
</ul>
<p><strong>Business &amp; Domain Knowledge</strong></p>
<ul>
<li>Ability to connect technical decisions to business KPIs and department goals.</li>
<li>Experience working closely with product and business stakeholders at all levels.</li>
</ul>
<p><strong>Our Adventure Includes</strong></p>
<ul>
<li>Strategic Impact: Shape the technical direction of a rapidly growing travel tech leader.</li>
<li>Technical Excellence: Work with cutting-edge technologies and influence architectural decisions.</li>
<li>Leadership Growth: Lead initiatives that impact millions of users and mentor the next generation of engineers.</li>
</ul>
<p><strong>Want to Travel with Us?</strong></p>
<p>Take a peek into our culture on Instagram @lifeatholidu and check out Tech at Holidu to meet the people behind the product.</p>
<p>Apply now and let’s make vacation dreams come true – at scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>JavaScript, TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR, DynamoDB, Redis, AWS, Kubernetes, Docker, Jenkins, Git, Sentry, ELK, Grafana, Looker, OpsGenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading travel tech company that provides vacation home booking and hosting services. It has a team of over 600 individuals from 60+ countries.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2247550</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f6deb282-e3c</externalid>
      <Title>Senior Backend Developer (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Senior Backend Developer and become part of the team that powers how our hosts&#39; vacation rentals reach the world.</p>
<p>You&#39;ll be working at the core of our distribution engine - where we take tens of thousands of homes and make them bookable on major travel platforms such as Holidu, Booking.com, Airbnb, VRBO, HomeToGo, and Check24.</p>
<p>This team operates in one of the most technically dynamic areas of our product. You will work with systems that synchronize large volumes of updates at high speed and maintain high availability, while integrating with a wide variety of partner APIs - each with its own structure and complexity.</p>
<p>It&#39;s work that demands precision, scalability, and smart engineering decisions, and it plays a crucial role in helping our hosts reach millions of guests worldwide.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and actively shape the team&#39;s direction, not just execute on it.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>
<li>Ensure our applications are highly scalable, capable of handling tens of thousands of properties and millions of bookings.</li>
<li>Work with data persistence - whether in PostgreSQL, Redis, S3, or new state-of-the-art technologies you help us evaluate.</li>
<li>Ship to production daily: deploying to our AWS Kubernetes cluster is part of the routine, not a special occasion.</li>
<li>Own the reliability of your services: set up monitoring, define SLOs, and drive incident resolution so your team can move fast with confidence.</li>
<li>Collaborate in a supportive, cross-functional team that values knowledge sharing and improving together.</li>
<li>Apply engineering best practices, and stay curious by experimenting with new technologies.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Proven track record of delivering product impact through engineering, not just building services, but solving real problems for users.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS-hosted Kubernetes cluster, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that powers how vacation rentals reach the world, with tens of thousands of homes bookable on major travel platforms.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2573674</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6831d5f-7e9</externalid>
      <Title>Principal AI Ops Architect, GPS</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for national LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>As a Principal AI Ops Architect, you will design and develop the production lifecycle of full-stack AI applications, while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required for our international government partners.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.</li>
<li>Ensure Full-Stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.</li>
<li>Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.</li>
<li>Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.</li>
<li>Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.</li>
<li>Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.</li>
<li>Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Experience: 6+ years in a high-impact technical role (SRE, FDE, or MLOps) with experience in the public sector.</li>
<li>Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.</li>
<li>System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle, connecting frontend/API layers to the backend and AI core.</li>
<li>Modern AI Stack expertise: Proficiency in coding and modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.</li>
<li>Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.</li>
<li>Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.</li>
<li>Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a leading AI company</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p><strong>About Us</strong></p>
<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, Machine Learning, Cloud Computing, Kubernetes, Vector Databases, Agentic Development, LLM Observability Tools, System Architecture, Global Government Security Standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4671740005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d48ddb1-b45</externalid>
      <Title>Mission Software Engineering Manager, Public Sector</Title>
      <Description><![CDATA[<p>We are looking for a Mission Software Engineering Manager to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Recruit a high-performing engineering team.</li>
<li>Drive engineering productivity. Provide guidance, mentorship, and technical leadership to a team of engineers working on Generative AI projects.</li>
<li>Collaborate with cross-functional teams to define, design, and execute the strategic roadmap.</li>
<li>Work directly with customers to understand their problems and translate those into features in Scale’s platform.</li>
<li>Be open to ~25% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer’s data to Scale’s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites</li>
<li>Develop customer-requested features, working closely with customers to ensure those features win their love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation</li>
<li>2+ years of engineering management or equivalent experience, including having managed an engineering team.</li>
<li>Track record of success as a hybrid customer-facing engineer or forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc</li>
<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li>Understand unique DoD and USG constraints when it comes to technology</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$273,700-$341,550 USD</Salaryrange>
      <Skills>Python, JavaScript, Cloud-Native Technologies, Linux, Networking, Data Engineering, Problem Solving, Node, React, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4631039005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>978310df-422</externalid>
      <Title>Staff FullStack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our International Public Sector team. As a Full Stack Software Engineer, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You will serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes.</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments.</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases.</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture.</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Master&#39;s or PhD in Computer Science, or equivalent deep industry experience architecting complex, distributed systems.</li>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP).</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability.</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments.</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization.</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction.</li>
<li>Proficient in Arabic.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Past experience at a startup as a CTO or founding engineer, or in a forward-deployed engineer / dedicated customer engineer role.</li>
<li>Experience working cross-functionally with operations.</li>
<li>A proven track record of building LLM-driven solutions, with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, LLMs, Cloud computing, Security, Scalability, Distributed systems, Arabic, Startup experience, CTO experience, Founding engineer experience, Forward deployed engineer experience, Customer engineer experience, Operations experience, LLM-driven solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673314005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3cc878fa-5d1</externalid>
      <Title>Infrastructure Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our core infrastructure in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers</li>
<li>Implementing custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Collaborating with platform, product teams and our customers directly to develop and implement innovative infrastructure that scales to meet evolving needs</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and the other latest AI technologies</li>
<li>Proficient in Python or Javascript/Typescript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
<li>Excellent communication skills with the ability to explain technical concepts to both technical and non-technical audiences</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, GCP, Azure, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4665557005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback, and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation full-stack engineering experience, with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Experience driving async communication practices to reduce communication friction</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Past experience working in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61e346b2-915</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>Distributed systems, Machine learning systems at scale, Load balancing, Request routing, Traffic management, LLM inference optimization, Batching, Caching, Kubernetes, AWS, GCP, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be1fd1e-8f3</externalid>
      <Title>Principal Architect</Title>
      <Description><![CDATA[<p>We are seeking a Principal Architect to drive the design, development, and deployment of our agentic AI products in a fast-paced, collaborative environment. In this role, you will lead a team of 50+ engineers, providing both strategic and technical guidance. You’ll be responsible for high-impact architectural decisions, cross-company collaboration, and executive-level engagements.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Lead and mentor a high-performing engineering team of 50+, fostering a culture of technical excellence and ownership.</li>
<li>Guide your team through complex challenges involving LLMs, AI agents, and large-scale distributed systems.</li>
<li>Represent Scale AI in high-stakes negotiations and strategic discussions with senior external partners, demonstrating strong technical competence and credibility.</li>
<li>Develop and communicate a compelling vision for Scale AI’s technology applied to your program.</li>
<li>Provide regular updates to senior leadership and key stakeholders on progress, risks, and opportunities.</li>
<li>Foster a culture of speed, unity of purpose, resilience, and teamwork.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience, including 5+ years in a technical leadership or staff role.</li>
<li>Deep understanding of modern AI/ML technologies, including experience working with LLMs and AI agents.</li>
<li>Proficiency in one or more modern programming languages (Python, JavaScript/TypeScript).</li>
<li>Hands-on experience with Kubernetes and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Strong product and business sense, with a track record of aligning engineering efforts with company goals.</li>
<li>Ability to operate effectively in ambiguous, fast-changing environments and guide your team to do the same.</li>
<li>Experience in executive-level engagement with industry partners and public sector customers.</li>
</ul>
<p><strong>Success Metrics</strong></p>
<p>Within 6 months:</p>
<ul>
<li>Successful demonstration of agentic AI’s mission value in high-stakes customer demonstrations.</li>
<li>Establish Scale AI as the preferred agentic AI partner for the PEO.</li>
<li>Establish a high-velocity, agile engineering cadence both internally and with our industry partners.</li>
</ul>
<p>Within 12–18 months:</p>
<ul>
<li>Secure a follow-on contract award with expanded scope for Scale.</li>
<li>Position Scale AI as the global AI leader in this mission area.</li>
<li>Establish developed solutions as Scale product offerings to deliver on future contracts.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$257,000-$321,000 USD</Salaryrange>
      <Skills>software engineering, technical leadership, AI/ML technologies, LLMs, AI agents, Kubernetes, cloud infrastructure, Python, JavaScript/TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599202005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a400e696-2d2</externalid>
      <Title>Staff Software Engineer, Enterprise GenAI</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. As a Staff Software Engineer, you will own large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
<li>Being able and willing to multitask and learn new technologies quickly</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>7+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops and provides AI systems for critical decision-making. It offers products and technologies for building, deploying, and overseeing AI applications.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4569678005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starting approach and the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also drive async communication practices to reduce friction.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd00b53a-6fa</externalid>
      <Title>Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are seeking a strong engineer to join our team and help us build and scale our product in a fast-paced environment. The ideal candidate will have a strong understanding of software engineering principles and practices, as well as experience with large-scale distributed systems.</p>
<p>You will be responsible for owning large new areas within our product, working across backend, frontend, and interacting with LLMs and ML models. You will solve hard engineering problems in scalability and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning large new areas within our product</li>
<li>Working across backend, frontend, and interacting with LLMs and ML models</li>
<li>Delivering experiments at a high velocity and level of quality to engage our customers</li>
<li>Working across the entire product lifecycle from conceptualization through production</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation</li>
<li>Experience scaling products at hyper growth startups</li>
<li>Experience tinkering with or productizing LLMs, vector databases, and other cutting-edge AI technologies</li>
<li>Proficiency in Python or JavaScript/TypeScript, and SQL</li>
<li>Experience with Kubernetes</li>
<li>Experience with major cloud providers (AWS, Azure, GCP)</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>Python, Javascript/Typescript, SQL, Kubernetes, AWS, Azure, GCP, LLMs, vector databases, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4513943005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>901202b0-bfa</externalid>
      <Title>Product Security Engineer - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>
<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>
<p>You will:</p>
<ul>
<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>
<li>Evaluate and enhance the security of our product offerings through RFC and service review.</li>
<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>
<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>
<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>
<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>
<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>
<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>
<li>Strong understanding of modern JavaScript application design.</li>
<li>Production experience with Kubernetes-backed services.</li>
<li>Hands-on experience with SAST and DAST tools and methodologies.</li>
<li>Familiarity with Terraform orchestration for infrastructure management.</li>
<li>Ability to structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>
<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within a team.</li>
<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>The base salary range for this full-time position in the location of Washington DC/Hawaii is: $205,700-$257,400 USD</p>
<p>The base salary range for this full-time position in the location of St. Louis/Suffolk is: $171,600-$214,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205,700-$257,400 USD (Washington DC/Hawaii), $171,600-$214,500 USD (St. Louis/Suffolk)</Salaryrange>
      <Skills>TypeScript, Python, NodeJS, JavaScript, Kubernetes, CI/CD, SAST, DAST, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4651559005</Applyto>
      <Location>St. Louis, MO; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>13667989-d19</externalid>
      <Title>Staff Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Platform Engineering team. As a key member of our team, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices.</p>
<p>In this role, you will:</p>
<ul>
<li>Define next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Drive the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentor software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identify opportunities and drive improvements to software development practices, processes, tools, and languages.</li>
<li>Present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>8+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and hosting knowledge-sharing sessions.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, with the ability to translate complex technical concepts for non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS, CircleCI, Helm, ArgoCD</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4518088005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starting individual with technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficient in Arabic, past experience working in a forward-deployed engineer/dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14499a71-fa9</externalid>
      <Title>Software Engineer, Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises: designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements , we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
<li>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model Integration, AI Agent Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4536653005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43952002-812</externalid>
      <Title>Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices. You&#39;ll also get widespread exposure to the forefront of the AI race as Scale sees it in enterprises, startups, governments, and large tech companies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>
<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and hosting knowledge-sharing sessions, among others.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676936005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d4c3fc5-2ed</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as TINES and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b255adba-bf4</externalid>
      <Title>Field Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Field Engineer to join our Public Sector team. As a Field Engineer, you will be on the front lines of our field engineering efforts for our federal AI projects, working closely with our largest public sector customers to ensure seamless and optimized experiences with Scale&#39;s technology.</p>
<p>Your primary responsibilities will include implementing end-to-end data integrations, syncing customers&#39; data to Scale&#39;s platform and back, and working closely with our customers&#39; engineering teams to optimize data pipelines. You will also design, develop, and maintain playbooks, internal tools, Scale&#39;s documentation, and SDKs to quickly get customers set up for long-term success.</p>
<p>In addition, you will partner with Software Engineers and Operations to remove any technical hurdles customers may face, debug technical issues impacting delivery and own technical escalations coming from the customer. You will be accountable for the customer&#39;s technical experience throughout their time with Scale.</p>
<p>The ideal candidate will have a track record of success as a hybrid customer-facing engineer or similar function, wearing multiple hats along the way. Prior technical hands-on experience working with clients in a pre or post-sales capacity to realize business goals is also required.</p>
<p>We offer a competitive compensation package, including base salary, equity, and benefits. The base salary range for this full-time position is $190,000-$290,000 USD in San Francisco, New York, and Seattle, $170,000-$260,000 USD in Hawaii, Washington DC, Texas, and Colorado, and $140,000-$220,000 USD in St. Louis.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000-$290,000 USD in San Francisco, New York, and Seattle, $170,000-$260,000 USD in Hawaii, Washington DC, Texas, and Colorado, and $140,000-$220,000 USD in St. Louis</Salaryrange>
      <Skills>Python, JavaScript, API integrations, Large Language Models, 2D Image Annotation, Container orchestration with Kubernetes, Helm charts for application deployment, Ansible or similar tools for automation, Experience in AI, Experience working in classified environments, Previous experience as a technical go-to-market resource, Understanding of DevSecOps principles</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4518690005</Applyto>
      <Location>San Francisco, CA; New York, NY; Honolulu, Hawaii, St. Louis, MO; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
      <Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
      <Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p>Job Title: Security Labs Engineer</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly 6 months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p>Current Project Areas</p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p>Responsibilities</p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale. An experiment that works on one cluster but not a hundred is not a finished result.</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p>Requirements</p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience required</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p>Location</p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve secure physical facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p>Deadline to Apply</p>
<p>None; applications are accepted on a rolling basis.</p>
<p>Annual Compensation Range</p>
<p>$405,000 - $485,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, cloud infrastructure (AWS, GCP, or Azure), Kubernetes, networking fundamentals, cross-functional execution, technical writing, offensive security, red teaming, security research, applied cryptography, zero-knowledge proofs, confidential computing, ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that specializes in developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859cb1cf-b9c</externalid>
      <Title>Senior AI Infrastructure Engineer, Model Serving Platform</Title>
      <Description><![CDATA[<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>
<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>
<li>Build an internal platform to empower LLM capability discovery.</li>
<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>
<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>
<li>Develop monitoring and observability solutions to ensure system health and performance.</li>
<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>5+ years of experience building large-scale, high-performance backend systems.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>
<li>Experience with LLM serving and routing fundamentals (e.g. rate limiting, token streaming, load balancing, budgets, etc.).</li>
<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>
<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>
<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, text-generation-inference</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4520320005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ff592ac-9d8</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Inference team, responsible for building and maintaining critical systems that serve Claude to millions of users worldwide. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models.</p>
<p>As a Senior Software Engineer, you will be responsible for designing, implementing, and deploying large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing. You will work closely with our research team to develop new inference features and integrate new AI accelerator platforms.</p>
<p>To succeed in this role, you should have significant software engineering experience, particularly with distributed systems, and be results-oriented with a bias towards flexibility and impact. You should also be able to pick up slack, even if it goes outside your job description, and thrive in environments where technical excellence directly drives both business results and research breakthroughs.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing</li>
<li>Work closely with our research team to develop new inference features and integrate new AI accelerator platforms</li>
<li>Collaborate with cross-functional teams to ensure seamless deployment and operation of our systems</li>
<li>Analyze observability data to tune performance based on real-world production workloads</li>
<li>Manage multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree or equivalent combination of education, training, and/or experience</li>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented with a bias towards flexibility and impact</li>
<li>Ability to pick up slack, even if it goes outside your job description</li>
<li>Thrives in environments where technical excellence directly drives both business results and research breakthroughs</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Familiarity with machine learning systems and infrastructure</li>
<li>Strong communication and collaboration skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>Distributed systems, Kubernetes, Cloud infrastructure, Machine learning systems, Infrastructure engineering, Python, Rust, Java, C++, Go</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>GBP</Compensationcurrency>
      <Compensationmin>225000</Compensationmin>
      <Compensationmax>325000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f117ca6-268</externalid>
      <Title>Senior Technical Consultant - ElasticSearch</Title>
      <Description><![CDATA[<p>As a Sr. Technical Consultant – Search, you will play a pivotal role in helping our customers realise the value of Elastic&#39;s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic&#39;s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Search platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack</li>
<li>Leading end-to-end delivery of customer engagements – from discovery and design through implementation, enablement, and optimisation</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement</li>
</ul>
<p>The ideal candidate will have 5+ years of experience as a consultant, engineer, or architect with deep expertise in Enterprise Search technologies, including Elasticsearch and related search platforms. They will also have hands-on experience designing and deploying search solutions, proficiency in at least one programming language, and knowledge of distributed search systems and large-scale infrastructure.</p>
<p>The role offers a competitive salary range of $110,900-$175,500 USD, with opportunities for growth and professional development in a dynamic and distributed company.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$110,900-$175,500 USD</Salaryrange>
      <Skills>Elasticsearch, Enterprise Search, Search Architecture, Distributed Search Systems, Large-Scale Infrastructure, Programming Language, Cloud Platforms, Lucene, Databases, Linux, Java, Docker, Kubernetes, DevOps Practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a search and analytics platform for various industries.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>110900</Compensationmin>
      <Compensationmax>175500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7411526</Applyto>
      <Location>United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>004e36d1-008</externalid>
      <Title>Senior Staff Software Engineer - Security Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Senior Staff Software Engineer to join our Security Engineering team. As a member of this team, you will be responsible for creating the vision and defining the strategy for security infrastructure. Your impact will be significant, as you will make Databricks safer for our customers by identifying and plugging key gaps in our infrastructure and services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Identifying and plugging key gaps in our infrastructure and services to make Databricks safer for our customers</li>
<li>Attracting top talent from across the industry</li>
<li>Representing the security engineering discipline throughout the organization</li>
<li>Representing Databricks at academic and industry conferences and events</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>9+ years of experience in Data Security or related areas and expertise in two or more of the following: Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</li>
<li>15+ years of experience building large-scale distributed systems with high availability</li>
<li>Leadership skills and experience to lead across functional and organizational lines</li>
<li>Strong communication skills to explain and evangelize Data Security to senior leaders across the company</li>
<li>Bias to action and passion for delivering high-quality solutions</li>
<li>MS or Ph.D. in Computer Science or related fields</li>
</ul>
<p>We offer a competitive salary range of $217,200-$288,400 USD, as well as eligibility for annual performance bonus, equity, and comprehensive benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$217,200-$288,400 USD</Salaryrange>
      <Skills>Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>217200</Compensationmin>
      <Compensationmax>288400</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7274902002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4daeb1d2-f04</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our team in Vancouver. As a fullstack software engineer, you will work with your team and product management to make insights from data simple. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>
<p>Our customers increasingly use Databricks to analyze petabyte-scale logs in real time. This creates new challenges across the entire data processing pipeline, including ingestion, indexing, processing, and the user experience itself. Our customers are also using Databricks to launch AI/BI, which is redefining Business Intelligence for the AI age. We have several open roles across the teams below:</p>
<ul>
<li>Log Analytics: Our customers increasingly use Databricks to analyze petabyte-scale logs in real time.</li>
<li>AI/BI: AI/BI is redefining Business Intelligence for the AI age.</li>
<li>Unity Catalog Business Semantics: Context is everything for AI. For enterprise data, that context needs to be governed and managed, which is what Unity Catalog Business Semantics offers.</li>
<li>Databricks Apps: Databricks Apps is one of the fastest growing products at Databricks, used by more than 2,500 customers who have created more than 20,000 apps.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design and a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>Canada Pay Range: $146,200-$201,100 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>146200</Compensationmin>
      <Compensationmax>201100</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099342002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>374022f0-c2a</externalid>
      <Title>Senior Software Engineer, Infrastructure - Platform Compute</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Software Engineer, Infrastructure - Platform Compute to join our team.</p>
<p>As a member of our Platform Product Group, you will be responsible for building a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality.</p>
<p>Our teams build and maintain the platforms critical to the existence of Coinbase.</p>
<p>The Compute team builds and operates the Kubernetes platform at Coinbase, which is the primary compute orchestration infrastructure for services at Coinbase.</p>
<p>You will work towards continuously improving the scalability, reliability, efficiency, and operational experience of using Kubernetes at Coinbase, working closely with the Routing, Security, Reliability, and Observability teams (among many others).</p>
<p>Responsibilities:</p>
<ul>
<li>Build tooling and automation to make management of our Kubernetes clusters easy and reliable.</li>
<li>Build tooling and automation to improve the developer and operational experience of working with Kubernetes for all users.</li>
<li>Operationalize our Kubernetes platform so that it continues to be automated and self-healing to prevent unnecessary on-call burden.</li>
<li>Develop net-new Kubernetes-related capabilities for service owners at Coinbase (e.g., one-off jobs, cron, different deployment strategies, support for EFS, automated right-sizing).</li>
<li>Support our customers as they operate critical services for Coinbase in Kubernetes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of software engineering experience, including experience with Kubernetes or similar compute orchestration systems (e.g. Mesos, Nomad)</li>
<li>Strong AWS and/or GCP infrastructure knowledge</li>
<li>Ability to build backend services in addition to infrastructure</li>
<li>A high bar for quality, a self-starter attitude, and strong interpersonal skills</li>
<li>Strong problem-solving skills: the ability to identify problems, determine their root cause, and see them through to solution</li>
<li>Ability to balance business needs with technical solutions</li>
<li>Experience scaling backend infrastructure</li>
</ul>
<p>Job #: P74890</p>
<p>*Answers to crypto-related questions may be used to evaluate your on-chain experience.</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$186,065-$218,900 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$186,065-$218,900 USD</Salaryrange>
      <Skills>Kubernetes, AWS, GCP, Software engineering, Compute orchestration, Automation, Backend services, Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet platform.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7576764</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eb830261-e76</externalid>
      <Title>Senior Software Engineer, Connectivity</Title>
      <Description><![CDATA[<p>We&#39;re looking for a senior engineer with deep experience building robust platforms. As a member of the Connectivity team, you&#39;ll design and own foundational platform systems that support scalable data generation, evaluation, and bespoke customer delivery across Scale&#39;s ecosystem.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing extensible, production-grade services that can support frontier AI workflows, including multi-modal inputs, long-running processes, and agentic orchestration.</li>
<li>Building and operating distributed systems at scale, with strong guarantees around correctness, reliability, observability, and cost efficiency.</li>
<li>Integrating with public LLM APIs and AI services, designing abstractions that are resilient to model churn, latency variability, and evolving usage patterns.</li>
<li>Designing and maintaining data transformation and processing systems, supporting complex schema evolution, customization, and high-throughput workloads.</li>
<li>Partnering closely with infrastructure, product, and customer-facing teams to define requirements, shape technical direction, and deliver seamless integration experiences for customers.</li>
<li>Leading multi-quarter technical initiatives, including authoring and driving a 6+ month technical roadmap for major platform investments.</li>
<li>Applying strong engineering judgment in ambiguous problem spaces, balancing speed with long-term maintainability and operational excellence.</li>
<li>Raising the quality bar through thoughtful system design reviews, rigorous code reviews, and mentorship grounded in real-world production experience.</li>
</ul>
<p>Ideal Experience:</p>
<ul>
<li>7+ years of professional software engineering experience, with a strong background in building and operating large-scale, production-grade platforms.</li>
<li>Deep expertise in distributed systems and cloud-native architectures, including Kubernetes, microservices, event-driven systems, caching, and production databases.</li>
<li>Proven ability to lead multi-quarter technical initiatives, work effectively across cross-functional teams, and apply strong architectural judgment in ambiguous environments.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native architectures, Kubernetes, microservices, event-driven systems, caching, production databases, LLM APIs, AI services, data transformation, processing systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4654275005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a6557b2b-d24</externalid>
      <Title>Senior Platform Engineer II, Compute Services</Title>
      <Description><![CDATA[<p>We are seeking a Senior Platform Engineer to join our Kubernetes Infrastructure team. This role involves administering our critical multi-tenant Kubernetes platforms and collaborating with development teams to establish proper deployment architectures.</p>
<p>The ideal candidate will have a strong background in resilient Kubernetes application architecture and deployment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Champion reliability initiatives for Kubernetes application deployments: Advocate for best practices to ensure high availability, scalability, and resilience of applications in Kubernetes, focusing on robust testing, secure pipelines, and efficient resource use.</li>
<li>Administer multi-tenant Kubernetes platforms: Manage complex multi-tenant Kubernetes clusters, configuring access, quotas, and security for isolation and optimal resource allocation while upholding SLAs.</li>
<li>Perform lifecycle and day 2 operations on clusters: Execute Kubernetes cluster lifecycle, including provisioning, patching, monitoring, backup, disaster recovery, and troubleshooting.</li>
<li>Deep dive into reliability issues: Conduct in-depth analysis and root cause identification for complex reliability incidents in Kubernetes, utilizing advanced debugging and monitoring tools to propose preventative measures.</li>
<li>Perform on-call duties: Respond to critical alerts and incidents outside business hours, providing timely resolution to minimize disruptions, collaborating with teams, and communicating clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s in CS, Engineering, or related field, or equivalent experience preferred.</li>
<li>CKA or similar certifications are highly desired.</li>
<li>5+ years administering multi-tenant SaaS Kubernetes (EKS, AKS, GKE).</li>
<li>Strong GitOps/DevOps experience with Argo CD or similar Helm chart management.</li>
<li>Proven Docker and containerization experience.</li>
<li>Strong Linux OS experience.</li>
<li>Proficient in Go.</li>
<li>Excellent problem-solving, debugging, and analytical skills.</li>
<li>Strong communication and collaboration.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>Benefits</strong></p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p><strong>Workplace</strong></p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Gitops/Devops, Argocd, Helm chart management, Docker, Containerization, Linux OS, Go, Problem-solving, Debugging, Analytical skills, Communication, Collaboration, CKA, Performance profiling, Optimization of distributed systems, Network protocols, Distributed consensus algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4607559006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d639959-bd7</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>JOB DESCRIPTION:</p>
<p>About EarnIn</p>
<p>EarnIn is a pioneer of earned wage access, offering financial flexibility to individuals living paycheck to paycheck.</p>
<p>We&#39;re seeking experienced, passionate, and resourceful senior engineers to join our backend teams. As a backend engineer, you will work cross-functionally with various teams and contribute to the design and development of our backend services.</p>
<p>This position will be a hybrid role based in our Bengaluru office, as part of our expanding site presence, with 2 days per week in the office. EarnIn offers excellent benefits for our employees, including healthcare, internet and cell phone reimbursement, a learning and development stipend, and potential opportunities to travel to our headquarters in Mountain View.</p>
<p>Responsibilities</p>
<ul>
<li>Design and implement features robust enough to support our rapid expansion.</li>
<li>Drive the implementation of new features by breaking complex problems down to their essentials, translating that complexity into elegant design, and creating high-quality, maintainable code.</li>
<li>Create and maintain test automation to enable continuous integration and development velocity.</li>
<li>Design &amp; deliver thoughtfully crafted REST APIs to drive the interactions between our client applications and backend services.</li>
<li>Collaborate with and mentor other engineers while providing thoughtful guidance using code, design, and architecture reviews.</li>
<li>Work cross-functionally with other teams (data science, design, product, marketing, analytics).</li>
<li>Leverage a broad skill set and help us implement and learn new technologies quickly.</li>
<li>Provide and receive design and implementation evaluations and improve with each iteration.</li>
<li>Debug production issues across our services infrastructure and multiple levels of our stack.</li>
<li>Think about distributed systems &amp; services, and care passionately about producing high-quality code.</li>
</ul>
<p>Requirements</p>
<ul>
<li>4+ years of development experience in software engineering.</li>
<li>Bachelor&#39;s, Master&#39;s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>
<li>Proficiency in at least one modern programming language such as C#, Java, Python, Go, or Scala.</li>
<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch) and data pipeline technologies.</li>
<li>Experience with continuous integration and delivery tools.</li>
<li>Experience developing and executing functional and integration tests.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Ability to thrive in a fast-paced, dynamic environment with a bias towards action and results.</li>
<li>Experience with Kubernetes, microservices, and event-driven architecture is a strong plus.</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) is a plus.</li>
</ul>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, Java, Python, Go, Scala, DynamoDB, MySQL, ElasticSearch, continuous integration, delivery tools, functional and integration tests, REST APIs, distributed systems &amp; services, Kubernetes, microservices, event-driven architecture, AI-assisted development tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, offering financial flexibility to individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7542937</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7dc0b69a-5b8</externalid>
      <Title>Senior Engineer, Storage Control Plane</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Storage Engineer to play a key role in designing, building, and operating the control plane for our high-performance AI storage platform. You&#39;ll help evolve CoreWeave&#39;s storage systems by building reliable, scalable, and high-throughput solutions that power some of the largest and most innovative AI workloads in the world.</p>
<p>This role involves close collaboration with teams across infrastructure, compute, and platform to ensure our storage services scale automatically and seamlessly while maximizing performance and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement a highly scalable multi-tenant control plane that supports CoreWeave&#39;s growing AI storage and cloud infrastructure needs.</li>
<li>Contribute to the development of exabyte-scale, S3-compatible object storage and distributed file systems, and integrate dedicated storage clusters into diverse customer environments.</li>
<li>Work with technologies such as RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, and distributed filesystems to optimize storage performance and efficiency.</li>
<li>Participate in efforts to improve the reliability, durability, and observability of our storage stack.</li>
<li>Collaborate with operations teams to monitor, analyze, and optimize storage systems using telemetry, metrics, and dashboards to improve performance, latency, and resilience.</li>
<li>Work cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack.</li>
<li>Share your knowledge and mentor other engineers on best practices in building distributed, high-performance systems.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>6–10 years of experience working in storage systems engineering or infrastructure.</li>
<li>Strong hands-on experience with object storage or distributed filesystems in production environments.</li>
<li>Experience with one or more storage protocols (e.g. S3, NFS) and file systems such as Ceph, DAOS, or similar.</li>
<li>Proficiency in a systems programming language such as Go, C, or Rust.</li>
<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana).</li>
<li>Solid understanding of cloud-native infrastructure, Kubernetes, and scalable system architecture.</li>
<li>Strong debugging and problem-solving skills in distributed, high-performance environments.</li>
<li>Clear communicator, able to work collaboratively across teams and share technical insights effectively.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>object storage, distributed filesystems, RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, cloud-native infrastructure, Kubernetes, scalable system architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4611874006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while making data-driven technical decisions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e0058690-78c</externalid>
      <Title>Senior Software Engineer, GenAI Platform</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a large-scale GenAI Platform at Reddit.</p>
<p>The Machine Learning Platform team at Reddit is a high-impact team that owns the infrastructure that powers recommendations, content discovery, user and content quantification, while directly impacting other teams such as Growth, Ads, Feeds, and Core Machine Learning teams.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internally and externally hosted LLMs, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</li>
<li>Designing and developing ML and Generative AI systems in cloud-based production environments at scale.</li>
<li>Building and managing enterprise-grade RAG applications using embeddings, vector search, and retrieval pipelines.</li>
<li>Implementing and operationalizing agentic AI workflows with tool use using frameworks such as LangChain and LangGraph.</li>
<li>Driving adoption of MLOps / LLMOps practices, including CI/CD automation, versioning, testing, and lifecycle management.</li>
<li>Establishing best practices for observability, monitoring, evaluation, and governance of GenAI pipelines in production.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>5+ years of experience in ML Engineering, AI Platform Engineering, or Cloud AI Deployment roles.</li>
<li>Experience operating orchestration systems such as Kubernetes at scale.</li>
<li>Deep experience with cloud-based technologies for supporting an ML platform, including tools like AWS, Google Cloud Storage, infrastructure-as-code (Terraform), and more.</li>
<li>Proficiency with the common programming languages and frameworks of ML, such as Go, Python, etc.</li>
<li>Excellent communication skills with the ability to articulate technical AI concepts to non-technical stakeholders.</li>
<li>Strong focus on scalability, reliability, performance, and ease of use.</li>
</ul>
<p>Benefits include comprehensive healthcare benefits, income replacement programs, 401k with employer match, global benefit programs, family planning support, gender-affirming care, mental health &amp; coaching benefits, flexible vacation &amp; paid volunteer time off, and generous paid parental leave.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,800-$267,100 USD</Salaryrange>
      <Skills>ML Engineering, AI Platform Engineering, Cloud AI Deployment, Kubernetes, AWS, Google Cloud Storage, Terraform, Go, Python, LangChain, LangGraph</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7753480</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5196c4ac-d97</externalid>
      <Title>Senior Software Engineer - Infrastructure and Tools</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and extending components of the core Databricks infrastructure.</li>
<li>Architecting multi-cloud systems and abstractions that allow the Databricks product to run on top of existing cloud providers.</li>
<li>Improving software development workflows for engineering and operational efficiency.</li>
<li>Using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement.</li>
<li>Developing automated build, test, and release infrastructure.</li>
<li>Setting and upholding the standard for engineering processes to support high-quality engineering.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>A BS (or higher) in Computer Science or a related field.</li>
<li>5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python.</li>
<li>A passion for building highly scalable and reliable infrastructure.</li>
<li>Experience architecting, developing, and deploying large-scale distributed systems.</li>
<li>Experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</li>
</ul>
<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6318503002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>acef3d4c-b32</externalid>
      <Title>Security Engineer, Product Security</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>
<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, AWS, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>
<p>You will:</p>
<ul>
<li>Leverage broad product security expertise to build and maintain software tooling that secures every layer of the modern AI/ML software ecosystem.</li>
<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>
<li>Evaluate and enhance the security of our product offerings through RFC and service review.</li>
<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>
<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>
<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>
<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>
<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>
<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Demonstrated ability to drive multi-month security initiatives independently, from problem definition through execution, without requiring significant direction.</li>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>
<li>Strong understanding of modern JavaScript application design.</li>
<li>Production experience operating and securing AWS infrastructure at scale.</li>
<li>Hands-on experience with SAST and DAST tools and methodologies.</li>
<li>Familiarity with Terraform orchestration for infrastructure management.</li>
<li>The ability to structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>
<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within a team.</li>
<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>TypeScript, Python, AWS, CI/CD, SAST, DAST, Terraform, NodeJS, Kubernetes, Modern Javascript application design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4643029005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>193a44d6-056</externalid>
      <Title>Staff Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our Global Public Sector team. As a key member of our team, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack, AI applications, to solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>You will:</p>
<ul>
<li>Serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation</li>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Master&#39;s or PhD in Computer Science, or equivalent deep industry experience architecting complex, distributed systems</li>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP)</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction</li>
<li>Proficient in Arabic</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Past experience working at a startup as a CTO or founding engineer or in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, Machine learning, Distributed systems, Cloud computing, Security, Arabic, LLM-driven solutions, Startup experience, CTO or founding engineer experience, Cross-functional experience with operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676610005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9af8d812-df8</externalid>
      <Title>AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI products.</p>
<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>
<ul>
<li>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</li>
<li>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</li>
<li>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</li>
<li>Collaborate closely with ML scientists to implement cutting edge training and inference methods and bring them to production.</li>
<li>Play an active role in hiring, mentoring, and developing other engineers on the team.</li>
<li>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</li>
</ul>
<p>We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:</p>
<ul>
<li>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</li>
<li>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</li>
<li>You have hands-on experience with one or more of the following:
<ul>
<li>Model training (especially transformers and LLMs).</li>
<li>Model inference at scale (again, especially transformers and LLMs).</li>
<li>Low-level GPU work, such as writing CUDA or Triton kernels.</li>
</ul>
</li>
<li>You are comfortable working in production environments at meaningful scale (traffic, data, or organizational).</li>
<li>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</li>
<li>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</li>
<li>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, Go, etc.). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary, annual bonus and equity</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated.</li>
<li>Generous paid time off above statutory minimum</li>
<li>Hybrid working</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
<li>Fun events for employees, friends, and family!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI company that builds customer service solutions. It was founded in 2011 and serves nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7824142</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>230b25df-0f4</externalid>
      <Title>Senior Software Engineer- Database Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our Database Infrastructure team. As a member of this team, you will build and operate large-scale, reliable, and performant data systems using ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</p>
<p>You will collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord. You will exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</p>
<p>You will work with a talented team of engineers who have built one of the largest communication platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and operate large-scale, reliable, and performant data systems with ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</li>
<li>Collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord.</li>
<li>Exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</li>
<li>Work with a talented team of engineers who have built one of the largest communication platforms in the world.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience with building distributed systems and datastore infrastructure.</li>
<li>Experience with highly-available and distributed databases: e.g. ScyllaDB, Cassandra, BigTable, DynamoDB, CockroachDB, Postgres w/HA, etc.</li>
<li>Proficiency with at least one statically-typed programming language: e.g. Rust, Go, Java, C, C++</li>
<li>Strong operating systems, distributed systems, and concurrency control fundamentals.</li>
<li>Familiarity with Linux internals.</li>
<li>Comfortable working in fast-paced environments.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience with Cassandra or Scylla.</li>
<li>Experience with Rust.</li>
<li>Knowledge of DevOps tools like Salt, Terraform, or Kubernetes.</li>
</ul>
<p>Why Discord?</p>
<p>Discord plays a uniquely important role in the future of gaming. We&#39;re a multi-platform, multi-generational, and multiplayer platform that helps people deepen their friendships around games and shared interests.</p>
<p>We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank.</p>
<p>Join us in our mission!</p>
<p>Your future is just a click away!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>ScyllaDB, PostgreSQL, ElasticSearch, Linux, Rust, Distributed systems, Datastore infrastructure, Highly-available and distributed databases, Operating systems, Concurrency control fundamentals, Linux internals, Cassandra, Go, Java, C, C++, DevOps tools, Salt, Terraform, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including playing video games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8200328002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>588dfb0e-611</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings.</li>
<li>Collaborating closely with customers to understand their unique business needs and creating, prototyping, and deploying tailored solutions that align with their requirements.</li>
<li>Leading proof-of-concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</li>
<li>Driving technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</li>
<li>Acting as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</li>
<li>Offering valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</li>
<li>Conducting periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</li>
<li>Staying informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders.</li>
<li>Leading the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</li>
<li>Representing CoreWeave at conferences and industry events, with occasional travel as required.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A B.S. in Computer Science or a related technical discipline, or equivalent experience.</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions.</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience designing and implementing cloud solutions.</li>
<li>A proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL).</li>
<li>Experience running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes.</li>
</ul>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience building solutions across multi-cloud environments, and client- or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4557835006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on, from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems, including high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools, to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d929542f-ab4</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking experienced senior engineers to join our backend teams. As a backend engineer, you will work cross-functionally with various teams and contribute to the design and development of our backend services.</p>
<p>This position will be a hybrid role based in our Bengaluru office, with 2 days on-site as part of our expanding site. EarnIn provides excellent benefits for our employees, including healthcare, internet/cell phone reimbursement, a learning and development stipend, and potential opportunities to travel to our Palo Alto HQ.</p>
<p>Responsibilities:</p>
<ul>
<li>Design &amp; implement features robust enough for our large scale.</li>
<li>Drive the implementation of new features: break complex problems down to their bare essentials, translate that complexity into elegant design, and create high-quality, maintainable code.</li>
<li>Create and maintain test automation to enable continuous integration and development velocity.</li>
<li>Design &amp; deliver thoughtfully crafted REST APIs to drive the interactions between our client applications and backend services.</li>
<li>Collaborate with and mentor other engineers, providing thoughtful guidance through code, design, and architecture reviews.</li>
<li>Work cross-functionally with other teams (data science, design, product, marketing, analytics).</li>
<li>Leverage a broad skill set and help us implement and learn new technologies quickly.</li>
<li>Provide and receive design and implementation evaluations and improve with each iteration.</li>
<li>Debug production issues across our services infrastructure and multiple levels of our stack.</li>
<li>Think about distributed systems &amp; services and care passionately about producing high-quality code.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of development experience in Software Engineering</li>
<li>Bachelor&#39;s, Master’s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>
<li>Proficient in at least one modern programming language such as C#, Java, Python, Go, or Scala.</li>
<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch) and data pipeline technologies.</li>
<li>Experience with continuous integration and delivery tools.</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs)</li>
<li>Experienced in developing and executing functional and integration tests.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Ability to thrive in a fast-paced, dynamic environment and have a bias towards action and results.</li>
<li>Experience with Kubernetes, microservices, and event-driven architecture is a strong plus.</li>
<li>Experience in payments or fintech is a plus.</li>
<li>Experience with payment processors or internal financial systems is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, Java, Python, Go, Scala, DynamoDB, MySQL, ElasticSearch, continuous integration, delivery tools, AI-assisted development tools, functional and integration tests, Kubernetes, microservices, event-driven architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn offers earned wage access products for individuals living paycheck to paycheck, with a healthy core business and world-class funding partners.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7392234</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>You will contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability, and troubleshoot and resolve complex deployment issues to ensure minimal disruption to development cycles.</p>
<p>You will develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment, and document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>You will proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes, and drive standardization and collaboration across product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies such as Kubernetes, Terraform, and Docker is required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>74bd458c-995</externalid>
      <Title>Engineer, Kubernetes Core Interfaces</Title>
      <Description><![CDATA[<p>The Kubernetes Core Interfaces Team at CoreWeave builds the control plane that powers our cloud infrastructure at scale. Our team ensures that the Kubernetes platforms running CoreWeave’s GPU workloads are reliable, fault-tolerant, and easy to operate, providing deep insights and smooth experiences for users and internal teams alike.</p>
<p>As an Engineer on the Kubernetes Core Interfaces Team, you’ll design and implement scalable solutions that simplify administration and enhance the user experience of CoreWeave’s Kubernetes platforms. You will develop Helm charts, custom controllers, API endpoints, and other control plane components, while building automation, dashboards, alerts, and testing frameworks. You’ll participate in the on-call rotation and collaborate closely with your teammates in a supportive, high-performance environment that encourages curiosity, ownership, and personal growth.</p>
<p>Some of what you’ll work on:</p>
<ul>
<li>Design and implement solutions for scale, fault tolerance, and operational simplicity in CoreWeave’s Kubernetes platforms.</li>
<li>Develop Helm charts, custom controllers, CRDs, gateways, API endpoints, and other Kubernetes control plane components.</li>
<li>Build deployment automation, monitoring dashboards, and operational insights for Kubernetes services.</li>
<li>Participate in the team’s on-call rotation, contributing to incident response and operational excellence.</li>
<li>Collaborate with teammates to share ideas, provide feedback, and grow together in a high-trust environment.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>3+ years of experience in software or infrastructure engineering.</li>
<li>Experienced in developing fault-tolerant, testable software services, primarily using Go.</li>
<li>Familiar with Kubernetes concepts and/or experienced administering Kubernetes clusters.</li>
<li>Comfortable working with Linux systems, shell scripting, and Linux storage/networking stacks.</li>
<li>Collaborative, curious, and excited to contribute to a diverse, high-performance team.</li>
</ul>
<p>Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You enjoy solving complex challenges at scale and improving operational workflows.</li>
<li>You’re curious about cloud infrastructure, Kubernetes, and high-performance systems.</li>
<li>You thrive in a collaborative environment, sharing knowledge and learning from teammates.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $109,000 to $160,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $160,000</Salaryrange>
      <Skills>Go, Kubernetes, Linux, shell scripting, Linux storage/networking stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4656273006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65befd80-0e2</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Staff-level backend software engineer to join our Live Pay team. You&#39;ll work cross-functionally with various teams and contribute to the design and development of key platform services. This person must be strong in JVM languages and event-driven architecture on AWS.</p>
<p>The Canada base salary range for this full-time position is $252,000-$308,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. This role will be hybrid from our Vancouver, CAN office, with 2 days a week in the office required.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the design and implementation of new features. Break down complex problems into their bare essentials, translate this complexity into elegant design, and create high-quality, clean code.</li>
<li>Make a meaningful impact on the lives of our community members.</li>
<li>Design, develop, and deliver large-scale systems.</li>
<li>Collaborate with and mentor other engineers, providing thoughtful guidance through code, design, and architecture reviews.</li>
<li>Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success.</li>
<li>Estimate and manage team project timelines and risks.</li>
<li>Care passionately about producing high-quality, efficient designs and code.</li>
<li>Constantly learn about new technologies and industry standards in software engineering.</li>
<li>Work cross-functionally with other teams, including analytics, design, product, marketing, and data science.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in backend software development</li>
<li>Bachelor&#39;s, Master&#39;s, or PhD in computer science, computer engineering, or a related technical discipline, or equivalent industry experience</li>
<li>Proficiency in at least one modern programming language, such as Java, Kotlin, Scala, or C#, and experience with at least one major framework such as Spring, Spring Boot, or ASP.NET Core</li>
<li>Hands-on experience working in cloud environments such as AWS, GCP, or Azure</li>
<li>Proficiency in event-driven systems such as Kafka, SQS, SNS, or Kinesis, and experience designing and operating scalable distributed systems</li>
<li>Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations</li>
<li>Hands-on experience with various databases, such as DynamoDB, MySQL, and ElasticSearch</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) to improve engineering productivity</li>
<li>Experience with continuous integration and delivery tools, and experience developing and executing functional and integration tests</li>
<li>Familiarity with a clean architecture approach and software craftsmanship</li>
<li>Experience with Kubernetes and microservice architecture is a strong plus</li>
<li>Excellent written and verbal communication skills</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$252,000-$308,000</Salaryrange>
      <Skills>Java, Kotlin, Scala, C#, Spring, Spring Boot, ASP.NET Core, AWS, GCP, Azure, Kafka, SQS, SNS, Kinesis, DynamoDB, MySQL, ElasticSearch, AI-assisted development tools, Continuous integration and delivery tools, Clean architecture approach, Software craftsmanship, Kubernetes, Microservice architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, delivering real-time financial flexibility for individuals living paycheck to paycheck. It has a healthy core business with a significant runway.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7680387</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
<Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability, from defining requirements with stakeholders through implementation and stakeholder acceptance.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers.</li>
<li>Manage the full lifecycle of feature development, from requirement definition to deployment on classified networks.</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements.</li>
<li>Lead customer engagements to translate mission needs into technical requirements.</li>
<li>Own communication with stakeholders to ensure implementation meets defined acceptance criteria.</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving.</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings.</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to collaborate effectively with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up to date with emerging trends in software engineering.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle); $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado); $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c38cbb6f-4b7</externalid>
      <Title>Staff Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Job Title: Staff Software Engineer, Inference</p>
<p>Location: Dublin, IE</p>
<p>Department: Software Engineering - Infrastructure</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex distributed-systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: €295.000-€355.000 EUR</p>
<p>Logistics</p>
<p>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience.</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience.</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position.</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact: advancing our long-term goals of steerable, trustworthy AI rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€295.000-€355.000 EUR</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5150472008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cb24149-c62</externalid>
      <Title>Principal Software Engineer, Productivity</Title>
      <Description><![CDATA[<p>We are looking for a Principal-level engineer who is passionate about building and evolving the developer productivity ecosystem used by the entire Workflows Engineering organisation.</p>
<p>As a productivity engineer, you&#39;ll work with both our Engineering and Site Reliability teams, owning our developer CLI (Golang) and Kubernetes tooling, automated release processes, and CI/CD systems in CircleCI.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Collaborate with the SRE and Engineering teams to manage, extend, and enhance existing developer productivity and platform tooling for local and remote Kubernetes environments</li>
<li>Own and optimise CI/CD pipelines in CircleCI</li>
<li>Assist in weekly release orchestration</li>
<li>Automate and improve processes via Golang tooling and Okta Workflows</li>
</ul>
<p>Minimum Required Knowledge, Skills, and Abilities:</p>
<ul>
<li>10+ years of software engineering experience, with a deep understanding of engineering processes, agile frameworks, and tooling, including programming proficiency in a language (preferably Go or a similar compiled language), methods, test development, algorithms, and data structures</li>
<li>Experience with Cloud Native Technologies (Kubernetes, ArgoCD, Crossplane, Docker)</li>
<li>Passionate about learning new technical ecosystems</li>
<li>Interested in working with container deployment and orchestration technologies at scale, with familiarity with fundamentals such as service discovery, deployments, monitoring, scheduling, and load balancing</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with CI/CD systems (such as CircleCI or GitHub Actions)</li>
<li>Experience with development and deployment in a hosted cloud environment, preferably AWS</li>
</ul>
<p>Education and Training:</p>
<p>BS, MS, or PhD in Computer Science or related field</p>
<p>The annual base salary range for this position for candidates located in Canada is between $177,000-$265,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$177,000-$265,000 CAD</Salaryrange>
      <Skills>software engineering processes, agile framework, Go, Kubernetes, ArgoCD, Crossplane, Docker, CI/CD Systems, development and deployment in a hosted cloud environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7361555</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[<p>Job Title: Senior Staff Operations Engineer, AIOps</p>
<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications, with the ability to translate it into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficiency in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ed46937-df6</externalid>
      <Title>Staff Developer Success Engineer - West</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>
<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>
<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>
<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>
<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>
<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>
<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, TypeScript, JavaScript, or C++.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07626e74-020</externalid>
      <Title>Engineering Architect, Identity (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Auth0 secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Software Architect, Identity</strong></p>
<p><strong>The Engineering Architect Team</strong></p>
<p>The Architecture team is a small group of very senior engineers reporting to our VP of Engineering Excellence, working broadly across the organisation in collaboration with Engineering, Product, and Security. We partner deeply with other Engineering teams for large projects, and provide direction and architectural guidance for smaller initiatives. We have a dual-pronged charter to “level up the tech stack and level up the people stack” via both technical contributions and partnerships/mentoring.</p>
<p>In this role, you will have the opportunity to significantly contribute to Auth0’s future technology direction. Through your experience, knowledge of industry trends, and technical abilities, you will provide guidance, build proofs of concept, and deliver production software implementations that help Auth0 Engineering teams move faster by using and developing standard patterns and technologies. You will also help advance the engineering culture and uplevel other engineers. Note that while this role involves a lot of guidance, documentation, and leadership, it also requires substantial hands-on coding and development of both applications and systems.</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Collaborate with Product, Security, and Engineering teams to define and continually improve Auth0’s technology stack and architecture.</li>
<li>Foster and lead innovation in the IAM space, with a strong focus on Agentic Identity.</li>
<li>Lead initiatives to enhance, scale, and evolve Auth0’s product offerings.</li>
<li>Embed within Engineering teams across the organisation for large projects, while providing guidance and lighter-touch engagements for smaller initiatives.</li>
<li>Design, architect, and document large-scale distributed systems.</li>
<li>Lead the development of complex, broadly-scoped functionality in a very large and deep set of services and components.</li>
<li>Teach by doing: coding, optimising, and troubleshooting Node.js and Go applications in collaboration with feature development teams.</li>
<li>Implement features and create consistent foundations using technologies such as AWS, Azure, Node.js, Go, MongoDB, Redis, PostgreSQL, and Kubernetes.</li>
<li>Investigate, understand, and resolve bottlenecks in our ability to scale, use resources efficiently, and maintain a 99.99% uptime SLA.</li>
<li>Drive technical decision making while striving to find the right balance between factors such as simplicity, flexibility, reliability, cost, and performance.</li>
<li>Participate in “round table” discussions and mentor team members and engineers throughout the organisation to level up our people.</li>
<li>Participate in our Engineering Leadership Team with other architects, directors, and executives.</li>
<li>Join our Incident Commander on-call rotation after spending time getting acquainted with our applications, systems, and processes, and receiving training. Members of our team do periodic on-call rotations for high-severity incidents to help up-level our responses.</li>
</ul>
<p><strong>What you’ll bring to the role</strong></p>
<ul>
<li>10+ years of software development experience.</li>
<li>5+ years of experience working on cloud applications.</li>
<li>Experience with API-first applications using REST and/or gRPC.</li>
<li>Passion for and thorough understanding of what it takes to build and operate secure, reliable systems at scale.</li>
<li>Knowledge of identity protocols such as OAuth, OIDC, and SAML.</li>
<li>Industry knowledge of the Authorization and Authentication spaces.</li>
<li>Experience building AI agents and/or MCP server applications.</li>
<li>Experience with security engineering and application security.</li>
<li>Very strong written and verbal communication skills, with a demonstrated ability to adjust your communication style to the intended audience, whether communicating with senior executives, customers, engineers, or product managers.</li>
<li>Mastery and deep understanding of hands-on software development building distributed systems.</li>
<li>Experience with multi-cloud environments and container deployments, particularly Kubernetes in AWS/Azure.</li>
<li>Prior experience with application performance management, tracing, and performance testing tools.</li>
<li>Excellence at creating clarity and alignment for technical initiatives.</li>
<li>Great ability to build trust through collaboration with multiple teams and to get consensus on a vision.</li>
<li>Knowledge of application security and cloud security best practices.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>Deep experience in Node.js (Javascript or Typescript), or Golang.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$274,000-$370,000 USD</Salaryrange>
      <Skills>API-first applications, REST, gRPC, OAuth, OIDC, SAML, Authorization, Authentication, AI Agents, MCP servers, Security engineering, Application security, Cloud security best practices, Node.js, Go, AWS, Azure, MongoDB, Redis, PostgreSQL, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7128746</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7de81b4-bec</externalid>
      <Title>Security Engineer, Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Infrastructure Security Engineer to join our team. This role is integral to ensuring the security and integrity of our platform.</p>
<p>You will be responsible for securing large cloud environments, orchestrating and securing various compute clusters, and reviewing infrastructure as code. Your expertise in cloud security, infrastructure automation, and advanced security practices will be essential in maintaining and enhancing our security posture.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Securing infrastructure across large cloud hosting providers (e.g., AWS, Azure, GCP).</li>
<li>Implementing and maintaining robust security configurations and policies for cloud environments.</li>
<li>Conducting regular security assessments and audits of infrastructure to identify vulnerabilities and areas for improvement.</li>
<li>Developing and enforcing security best practices for infrastructure automation and orchestration.</li>
<li>Collaborating with Developer Experience, IT, and product teams to integrate security into all stages of the infrastructure lifecycle.</li>
<li>Reviewing and securing infrastructure as code (e.g., Terraform, CloudFormation).</li>
<li>Educating and mentoring team members on infrastructure security best practices and emerging threats.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in NodeJS, TypeScript, and Kubernetes.</li>
<li>Experience with orchestrating and securing GPU clusters.</li>
<li>Proficiency in infrastructure as code tools such as Terraform and CloudFormation.</li>
<li>Excellent communication skills, with the ability to clearly explain technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within an organisation.</li>
<li>Relevant security certifications (e.g., AWS Certified Security Specialty, Certified Cloud Security Professional) are a plus.</li>
<li>Experience in a senior or lead security role is preferred.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>cloud security, infrastructure automation, advanced security practices, NodeJS, TypeScript, Kubernetes, Terraform, CloudFormation, orchestrating and securing GPU clusters, relevant security certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4646888005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d799d883-0dd</externalid>
      <Title>Solutions Architect- Networking</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will demonstrate thought leadership and engage hands-on throughout our customers&#39; entire lifecycle. From establishing their Kubernetes environment to developing proofs of concept, onboarding, and optimizing workloads, you will lead innovation at every turn.</p>
<p>In this role, you will:</p>
<ul>
<li>Serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on networking technologies within high-performance compute (HPC) environments.</li>
<li>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</li>
<li>Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</li>
<li>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</li>
<li>Act as a virtual member of CoreWeave&#39;s Networking product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</li>
<li>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</li>
<li>Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</li>
<li>Stay informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders.</li>
<li>Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</li>
<li>Represent CoreWeave at conferences and industry events, with occasional travel as required.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>B.S. in Computer Science or a related technical discipline, or equivalent experience.</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focused on building distributed systems or HPC/cloud services, with expertise in infrastructure networking.</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience designing and implementing cloud solutions.</li>
<li>Proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</li>
<li>Expertise with a broad range of networking technologies and topics, with the familiarity to understand needs and use cases as they relate to securing and enabling high-performance networking environments.</li>
<li>Experience managing infrastructure networking, Kubernetes CSI management, and private networking concepts.</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Code contributions to open-source inference frameworks.</li>
<li>Experience with scripting and automation related to network technologies.</li>
<li>Experience building solutions across multi-cloud environments.</li>
<li>Client- or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>cloud computing, Kubernetes, infrastructure networking, high-performance computing, networking technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), open-source inference frameworks, scripting and automation, multi-cloud environments, latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4568528006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>12eeb115-0aa</externalid>
      <Title>Staff+ Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>
<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>
<li>Define infrastructure architecture, ensuring the hardest problems get solved, whether by you directly or by working through others</li>
<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>
<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience</li>
<li>Led complex, multi-quarter technical initiatives that span multiple teams or systems</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>
<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual Salary: $405,000-$485,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, systems language, Python, Rust, Go, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108817008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90423d85-ea7</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>As a Full Stack software engineer, you will work with your team and product management to make insights from data simple. We are looking for engineers who are customer obsessed and can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>
<p>Some example experiences you&#39;ll create for our customers, covering the full project lifecycle from loading data and visualizing results to creating statistical models and deploying production artifacts, include:</p>
<ul>
<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks and data sources.</li>
<li>Tools to create, deploy, test, and upgrade complex data pipelines, with powerful features to visualize data graphs.</li>
<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>
<li>A great SQL-centric data exploration and dashboarding experience on Databricks.</li>
<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>
</ul>
<p>We are looking for engineers with 5+ years of experience with HTML, CSS, and JavaScript, passion for user experience and design, and a deep understanding of front-end architecture. You should be comfortable working towards a multi-year vision with incremental deliverables, motivated by delivering customer value, and experienced with modern JavaScript frameworks and server-side web technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, SQL, Cloud technologies (AWS, Azure, GCP, Docker, or Kubernetes), Modern JavaScript frameworks (React, Angular, or VueJs/Ember), Server-side web technologies (Node.js, Java, Python, Scala, C#, C++, Go)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best Data Intelligence Platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5445641002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32c0c69a-037</externalid>
      <Title>Staff Software Engineer, Inference</Title>
      <Description><![CDATA[<p><strong>About the role:</strong></p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Work end to end on identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research</li>
<li>Collaborate with the team to design and implement solutions to complex problems</li>
<li>Develop and maintain large-scale distributed systems</li>
<li>Implement and deploy machine learning systems at scale</li>
<li>Build and operate load balancing, request routing, and traffic management systems</li>
<li>Apply LLM inference optimization, batching, and caching strategies</li>
<li>Work with Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Develop in Python or Rust</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p><strong>Application Instructions:</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€295,000-€355,000 EUR</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5150472008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ebab2b70-64e</externalid>
      <Title>Senior Staff Software Engineer - Security Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Senior Staff Software Engineer to join our Security Engineering team. As a member of this team, you will be responsible for creating the vision and defining the strategy for security infrastructure. Your impact will be significant, making Databricks safer for our customers by identifying and plugging key gaps in our infrastructure and services. You will also attract top talent from across the industry, represent the security engineering discipline throughout the organization, and represent Databricks at academic and industry conferences and events.</p>
<p>To be successful in this role, you will need 9+ years of experience in data security or related areas, with expertise in two or more of the following: cryptography, Kubernetes security, web security, governance, privacy, trust, safety, authentication, identity management, access control, key management, inter-service authentication, secure application frameworks, and detection and response. You will also need 15+ years of experience building large-scale distributed systems with high availability, leadership skills, strong communication skills, a bias to action, and a passion for delivering high-quality solutions.</p>
<p>In addition to your technical expertise, you will need to have a strong understanding of the company&#39;s goals and objectives, as well as the ability to communicate effectively with stakeholders at all levels of the organization.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$228,400-$303,550 USD</Salaryrange>
      <Skills>Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection and Response</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7274908002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>baad2598-8bc</externalid>
      <Title>Staff / Senior Software Engineer, Compute Capacity</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Accelerator Capacity Engineering (ACE) team manages one of the largest and fastest-growing accelerator fleets in the industry. As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on.</p>
<p><strong>What This Team Owns</strong></p>
<p>The team’s work spans three functional areas: data infrastructure, fleet observability, and compute efficiency. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>
<p><strong>Data Infrastructure</strong></p>
<p>Collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against.</p>
<p><strong>Fleet Observability</strong></p>
<p>Making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation.</p>
<p><strong>Compute Efficiency</strong></p>
<p>Measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery.</li>
<li>Develop and maintain observability infrastructure (Prometheus recording rules, Grafana dashboards, and alerting systems) that surfaces actionable signals about fleet health, occupancy, and efficiency.</li>
<li>Instrument and analyze compute efficiency metrics across training, inference, and eval workloads.</li>
<li>Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging.</li>
<li>Operate Kubernetes-native systems at scale: deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>
<li>Normalize and reconcile data across heterogeneous sources, including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li>5+ years of software engineering experience with a strong track record building and operating production systems.</li>
<li>Kubernetes fluency at operational depth: you’ve operated production K8s at meaningful scale, not just written manifests.</li>
<li>Data pipeline engineering experience: designing, building, and owning the full lifecycle of production data pipelines.</li>
<li>Observability tooling experience: Prometheus, PromQL, and Grafana are in the critical path for this team.</li>
<li>Python and SQL at production quality.</li>
<li>Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level: compute, billing, usage APIs, cost management tooling.</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Multi-cloud data ingestion experience, especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats.</li>
<li>Accelerator infrastructure familiarity: GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.</li>
<li>Performance engineering and benchmarking experience: building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.</li>
<li>Data-as-product thinking: experience building internal data products with self-service access, schema contracts, API serving, and documentation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, SQL, Prometheus, Grafana, BigQuery, Cloud computing, Data pipeline engineering, Observability tooling, Multi-cloud data ingestion, Accelerator infrastructure, Performance engineering, Data-as-product thinking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5126702008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bc4518a-7e3</externalid>
      <Title>AI Applications Ops Lead, GPS</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for national LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>As a Production AI Ops Lead, you will own the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required by our international government partners.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.</li>
<li>Ensure full-stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.</li>
<li>Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.</li>
<li>Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.</li>
<li>Incident command: Lead the response for production issues in mission-critical environments, ensuring rapid resolution and building the guardrails to prevent them from happening again.</li>
<li>Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.</li>
<li>Drive product evolution: Partner with our Engineering and ML teams to ensure the lessons learned in the field directly influence the technical architecture and decisions of future use cases.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Experience: 6+ years in a high-impact technical role (SRE, FDE, or MLOps) with experience in the public sector.</li>
<li>Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.</li>
<li>System architecture proficiency: Proven experience maintaining production-grade applications with a deep understanding of the full request lifecycle, connecting frontend/API layers to the backend and AI core.</li>
<li>Modern AI stack expertise: Proficiency in coding and modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.</li>
<li>Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.</li>
<li>Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.</li>
<li>Customer communication: The ability to explain to a high-ranking official why system performance has degraded and how we are fixing it.</li>
</ul>
<p><strong>About Us</strong></p>
<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Vector databases, Agentic development, LLM observability tools, SRE, FDE, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4654510005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>16599c27-a87</externalid>
      <Title>Senior Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure, ensure the reliability of our multi-cloud Kubernetes clusters and pipelines, and implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. We focus on automation so we can spend our energy where it matters.</p>
<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field. You should have deep proficiency in languages such as Golang or Python, familiarity with container-related security best practices, and production experience with Kubernetes and its ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Flexible PTO to take the time you need, when you need it</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Retirement savings plan to help you plan for the future</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>In-office meal program and commuter benefits provided for onsite employees</li>
</ul>
<p>Compensation at Cresta:</p>
<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>
<p>OTE Range: $205,000–$270,000 + Offers Equity</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$205,000–$270,000</Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center using AI and human intelligence.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5137153008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>611720bf-294</externalid>
      <Title>Senior Application Security Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is a financial platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>
<p>As a Senior Application Security Engineer, you will focus on finding and responding to security vulnerabilities across the Brex platform. In this role, you will perform code reviews, design reviews, penetration testing, and vulnerability management. You will develop and maintain tooling to perform static and dynamic testing of the Brex platform, as well as tooling that supports secure developer workflows.</p>
<p>Application Security is part of our wider Financial Scale organization, which means you will work closely with the Security Operations, GRC, Product Security, Front End Platform, and IT Infrastructure teams.</p>
<p>We’re looking for individuals with a strong background and interest in penetration testing. You should have a demonstrated ability to find vulnerabilities in complex systems and craft exploits to demonstrate business impact.</p>
<p>This role is highly cross-functional and collaborative; you will have the opportunity to work with every engineering team across Brex.</p>
<p>Building a world-class financial service requires world-class security. Brex is pioneering the next wave of AI-driven financial services for dynamic, high-impact companies like Coinbase, Robinhood, and Anthropic.</p>
<p>Responsibilities</p>
<ul>
<li>Identify vulnerabilities, demonstrate business impact, and articulate the risk of specific vulnerabilities to drive prioritization efforts</li>
<li>Perform penetration testing and design reviews, looking for vulnerabilities and insecure designs, and work with engineering and product to design secure product features</li>
<li>Maintain and build internal tools to automate security efforts, perform SAST and DAST testing of the Brex platform, and support secure development practices</li>
<li>Build and contribute to a culture of collaborative security excellence through technical leadership, learning sessions, and mentorship within the team and wider organization</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of work experience in an Application Security or related role</li>
<li>Ability to find vulnerabilities in complex systems, demonstrating business impact through custom attack chains</li>
<li>Experience with a wide range of secure development activities, including threat modeling, developer education, and incident response</li>
<li>Knowledge of Python, scripting languages, and AI/agentic workflows to automate tasks, build tools, and improve productivity</li>
<li>Collaborative mindset paired with strong written and verbal communication skills</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Proficiency with Kotlin, gRPC, GraphQL, Kubernetes</li>
<li>Previous experience as a software engineer</li>
<li>Consultancy experience performing web application security reviews</li>
<li>Experience with securing distributed systems in AWS and cloud environments</li>
<li>Experience with pentesting and securing agentic features and systems</li>
<li>Contributions to the wider technical community: open source, public research, mentorship, community organizing, blogging, CVEs, presentations, etc.</li>
<li>Experience submitting to bug bounty programs or responsible disclosure programs</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $192,000 - $240,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$192,000 - $240,000</Salaryrange>
      <Skills>Python, Secure development activities, Threat modeling, Developer education, Incident response, AI/agentic workflows, Collaborative mindset, Strong written and verbal communication skills, Kotlin, gRPC, GraphQL, Kubernetes, Software engineering, Web application security reviews, Distributed systems in AWS and cloud environments, Pentesting and securing agentic features and systems, Contributions to the wider technical community</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8249884002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a2c81b27-4e2</externalid>
      <Title>Sr. Engineering Manager, AI/ML Serving Platform</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Sr. Engineering Manager to lead the team that builds the serving and deployment infrastructure for all AI/ML models at Pinterest. The AI/ML Serving Platform team provides foundational tools and infrastructure used by hundreds of AI/ML engineers across Pinterest, including recommendations, ads, visual search, growth/notifications, trust and safety.</p>
<p>The ideal candidate will have experience managing platform engineering teams with many cross-organizational customers, leading the development of large-scale distributed serving systems, and working with AI/ML inference technologies for online serving at Web scale.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the team to deliver continual improvements in advanced model architectures, cost-efficient resource utilization, and AI/ML developer productivity.</li>
<li>Setting technical direction for the team based on company and org priorities.</li>
<li>Coaching and developing talent on the team.</li>
</ul>
<p>In return, you&#39;ll have the opportunity to work on a high-impact project that will shape the future of AI/ML at Pinterest.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$208,592-$429,454 USD</Salaryrange>
      <Skills>AI/ML inference technologies, PyTorch, TensorFlow, Kubernetes, C++, TorchScript, CUDA Graph</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7569150</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c76014f6-557</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent Runtime)</Title>
      <Description><![CDATA[<p>Build real-time AI agent infrastructure: Design and operate the stateful, low-latency runtime that powers voice and chat AI agents , from LLM streaming and conversation state management to graceful recovery and multi-channel support.</p>
<p>Solve distributed systems problems: Own session management across scaled-out workers , including affinity, checkpointing, crash recovery, and consistency under concurrent access.</p>
<p>Build a function execution platform: Own a serverless-style runtime where customers deploy custom logic , build orchestration, container lifecycle, autoscaling, and versioned rollouts.</p>
<p>Own developer experience and test infrastructure: Build CLI tools, local development environments, and test execution frameworks that let engineers iterate quickly and ship with confidence.</p>
<p>Raise the bar on production quality: Drive observability, incident response, and engineering best practices across the team.</p>
<p>We&#39;re looking for a senior software engineer with 5+ years of experience in infrastructure, platform, or systems work. You should have strong Python and Go skills, as well as a deep understanding of distributed systems, consistency, fault tolerance, state management, and concurrency.</p>
<p>Experience with Kubernetes and cloud-native infrastructure is also required. You should be able to build developer-facing tooling, such as CLIs, SDKs, local dev environments, or internal platforms.</p>
<p>A high bar for code quality, thorough testing, thoughtful code review, and sustainable engineering practices is essential. You should be comfortable operating what you build: on-call, incident response, and production ownership.</p>
<p>An AI-native workflow is a must: you should actively use LLMs and AI-assisted tools in your daily development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Distributed systems, Kubernetes, Cloud-native infrastructure, Developer-facing tooling, Code quality, Testing, Code review, Sustainable engineering practices, LLMs, AI-assisted tools, Real-time voice or streaming media systems, Hands-on with LLM integration, Serverless or function-as-a-service platforms, Workflow engines, Infrastructure-as-code and GitOps workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4675293008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0796e182-42e</externalid>
      <Title>Sr. Software Engineer, Backend (Search Platform)</Title>
      <Description><![CDATA[<p>About Dialpad</p>
<p>Dialpad is the AI-native business communications platform. We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>
<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>
<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyze conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time.</p>
<p>Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do.</p>
<p>Visit dialpad.com to learn more.</p>
<p>Being a Dialer</p>
<p>At Dialpad, AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>
<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>
<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>
<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>
<p>Your role</p>
<p>Dialpad’s Product Engineering organization is responsible for building and maintaining the customer-facing features at scale across all of our cloud-native products and services.</p>
<p>Every day, millions of users across the world leverage our technology for communicating effectively and efficiently.</p>
<p>Every engineer on our global engineering team is given the opportunity to take ownership of a large portion of the product where they’re able to see immediate results.</p>
<p>Combining natural language processing and artificial intelligence with world-class cloud computing, the things you’ll create at Dialpad will shape the future of work, enabling companies to work from anywhere and making business communication more human.</p>
<p>Dialpad’s Analytics team owns data pipelines, multiple databases, a modular query layer, and rich FE components to deliver intuitive and powerful end-user-facing analytics experiences that allow Dialpad customers to make data-driven business decisions.</p>
<p>Our teams are highly collaborative and comprise cross-disciplinary professionals, including Product Managers, Designers, QA specialists, as well as Engineers specialising in Data Engineering, Data Science, and Telephony.</p>
<p>This position reports to the Engineering Manager, who is based in Bengaluru. The role is based in our Bengaluru, India office under a hybrid working arrangement.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Contribute to the design, development, and maintenance of information retrieval and distributed systems.</li>
<li>Build and optimize search engines, including indexers, analyzers, ranking, and re-ranking strategies.</li>
<li>Work on hybrid search techniques, including dense vector manipulation, rank fusion, and reranking.</li>
<li>Maintain and enhance highly scalable search platforms with a focus on performance and cost efficiency.</li>
<li>Ensure high availability, reliability, and fault tolerance in search services.</li>
<li>Collaborate with cross-functional teams to translate business requirements into technical solutions.</li>
<li>Develop and optimize real-time distributed systems, microservices, and message-driven architectures.</li>
<li>Implement and maintain monitoring, alerting, and performance metrics for platform reliability.</li>
<li>Evaluate and integrate emerging technologies to improve search capabilities.</li>
<li>Write clean, modular, and well-tested code while following best engineering practices.</li>
<li>Participate in code reviews to ensure quality, maintainability, and scalability.</li>
<li>Provide mentorship and technical guidance to junior engineers.</li>
</ul>
<p><strong>Skills you’ll bring</strong></p>
<ul>
<li>4-7 years of experience in information retrieval or distributed systems engineering.</li>
<li>Strong understanding of search platforms and experience maintaining search engines at scale.</li>
<li>Deep knowledge of indexers, analyzers, field mapping, and ranking techniques.</li>
<li>Experience with NLP/NLU within the context of information retrieval.</li>
<li>Expertise in dense vector manipulation and optimization.</li>
<li>Familiarity with hybrid search, rank fusion, and reranking techniques.</li>
<li>Proficiency in Go and Python 3 (experience with Rust or TypeScript is a plus).</li>
<li>Strong understanding of distributed systems, microservices, and message-driven architectures.</li>
<li>Passion for real-time performance optimization and high availability.</li>
<li>Experience with API design using Swagger, OpenAPI, or equivalent tools.</li>
<li>Knowledge of gRPC or equivalent RPC protocols.</li>
<li>Experience with Docker and Kubernetes for containerized deployments.</li>
<li>Familiarity with cloud platforms (GCP preferred, AWS/Azure optional).</li>
<li>Hands-on experience with Infrastructure as Code tools like Terraform or Ansible.</li>
<li>Knowledge of CI/CD frameworks and continuous delivery practices.</li>
</ul>
<p><strong>Why Join Dialpad</strong></p>
<ul>
<li>Work at the center of the AI transformation in business communications</li>
<li>Build and ship agentic AI products that are redefining how companies operate</li>
<li>Join a team where AI amplifies every employee’s impact</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>
<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>
<p>Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>
<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>
<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>information retrieval, distributed systems engineering, search platforms, indexers, analyzers, field mapping, ranking techniques, NLP/NLU, dense vector manipulation, optimization, hybrid search, rank fusion, reranking, Go, Python 3, API design, gRPC, Docker, Kubernetes, cloud platforms, Infrastructure as Code, CI/CD frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8340906002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86dc459d-a0f</externalid>
      <Title>Senior Software Engineer, Platform as a Service</Title>
      <Description><![CDATA[<p>We are seeking a technical, hands-on, empathetic senior software engineer to help define and deliver our Platform as a Service (PAAS) mission. As a senior engineer on the PAAS team, you will collaborate with the team to deliver forward-looking, customer-centric tooling. Your expertise in building and using best-in-class infrastructure tools will equip our engineering organisation with tools to move quickly and deliver features that bring millions of people together.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Working with customer engineering teams to ensure we’re building solutions that developers love using day-in and day-out</li>
<li>Collaborating with the Internal Development Experience (IDX) team to ensure an easy path to go from development through staging into production</li>
<li>Working with the Platform Security team in order to secure every path to production</li>
<li>Shipping Rust code to YAY, our in-house deployment tooling built around Google Kubernetes Engine and Temporal</li>
<li>Exposing the full flexibility of Kubernetes for users while abstracting the complexities away</li>
<li>Building tools to manage the configuration, observability, and scaling characteristics of our infrastructure</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience in software development with a focus on tooling, infrastructure, and automation</li>
<li>Experience working in multi-milestone and even multi-quarter projects</li>
<li>Expertise and empathy when troubleshooting issues with customer engineering teams</li>
<li>Expertise using and building upon the primitives of standard cloud infrastructure tooling like Kubernetes and Docker</li>
<li>Experience developing in cloud-based environments (we use Google Cloud; knowledge of Amazon Web Services and/or Azure also great!)</li>
<li>Experience with infrastructure-as-code tooling (we use Terraform)</li>
</ul>
<p>Bonus points for experience with CI, build, and deployment technologies like Buildkite, Bazel, and Terraform, as well as cloud networking tools such as Istio and Envoy, and application observability tools like Datadog and/or Sentry.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Rust, Kubernetes, Docker, Terraform, Google Cloud, Amazon Web Services, Azure, CI/CD, infrastructure-as-code, Buildkite, Bazel, istio, envoy, Datadog, Sentry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, with a strong focus on gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8409021002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f95ac4b6-a7c</externalid>
      <Title>Software Engineer - Delivery Platform</Title>
      <Description><![CDATA[<p>At Squarespace, we&#39;re reimagining how people bring their ideas to life online. Our Infrastructure Engineering teams are at the heart of that mission --- building the platforms and tooling that let every engineer ship with speed and confidence.</p>
<p>As a Software Engineer on the Delivery team, you&#39;ll work on the systems that sit between GitHub and production. These systems touch nearly every Squarespace service: CI/CD pipelines, GitOps workflows, and the deployment platform that spans our Kubernetes clusters and regions. If you&#39;re passionate about developer experience, modern deployment tooling, and making other engineers more productive, we want to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve the platform that ships Squarespace services to production: CI/CD pipelines, GitOps workflows, and deployment tooling across Kubernetes clusters.</li>
<li>Increase adoption of modern deployment tooling across high-traffic services.</li>
<li>Design reusable Helm charts, GitOps templates, and standardized rollout/rollback patterns for engineering teams.</li>
<li>Identify improvements to CI pipeline performance and reliability across the organization.</li>
<li>Contribute to AI-assisted delivery tooling that helps engineers self-serve and diagnose build failures.</li>
<li>Develop technical documentation to ensure knowledge sharing and reusability.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of backend or platform engineering experience.</li>
<li>Experience building or improving CI/CD pipelines (e.g., Drone, Jenkins, GitHub Actions, Harness).</li>
<li>Knowledge of Docker and Kubernetes.</li>
<li>Familiarity with GitOps tooling such as Argo CD or Flux.</li>
<li>Proficiency in Go, Python, or Java.</li>
<li>Experience with Google Cloud, AWS, or Azure.</li>
<li>Comfortable with Agile methodologies and Git.</li>
<li>Experience troubleshooting issues with users.</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>A choice between medical plans with an option for 100% covered premiums</li>
<li>Fertility and adoption benefits</li>
<li>Access to supplemental insurance plans for additional coverage</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Retirement benefits with employer match</li>
<li>Flexible paid time off</li>
<li>12 weeks paid parental leave and family care leave</li>
<li>Pretax commuter benefit</li>
<li>Education reimbursement</li>
<li>Employee donation match to community organizations</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
<li>Dog-friendly workplace</li>
<li>Free lunch and snacks</li>
<li>Private rooftop</li>
<li>Hack week twice per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,500 - $178,250 USD</Salaryrange>
      <Skills>backend or platform engineering experience, CI/CD pipelines, Docker, Kubernetes, GitOps tooling, Go, Python, Java, Google Cloud, AWS, Azure, Agile methodologies, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and is headquartered in New York City.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7789058</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae849446-fe5</externalid>
      <Title>Site Reliability Engineer - Cybersecurity</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Cybersecurity / SRE team at xAI is focused on ensuring the security and reliability of X Money. This role will primarily focus on the X Money platform but will also cross over with the X Social platform.</p>
<p>You&#39;ll be responsible for securing and maintaining the reliability of X Money&#39;s infrastructure. You&#39;ll work closely with cross-functional teams to enhance security measures, improve system resilience, and implement best practices.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and secure mission-critical applications in a hybrid cloud environment.</li>
<li>Manage identities and roles effectively.</li>
<li>Monitor and remediate infrastructure to comply with regulations and best practices (e.g., PCI, NIST CSF).</li>
<li>Maintain a SIEM and all data pipelines needed for reliable alerting.</li>
<li>Design and implement secure container standards and automation to enable frictionless developer workflows.</li>
<li>Maintain Kubernetes security aligned with current best practices.</li>
<li>Build, deploy, and maintain security operations infrastructure using Python, Terraform, and Puppet.</li>
<li>Secure and enhance CI/CD pipelines.</li>
<li>Integrate and maintain code scanning platforms.</li>
<li>Develop dashboards and alerts from security metrics.</li>
<li>Own security projects: identify issues and implement solutions.</li>
<li>Apply critical analysis and problem-solving skills.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven experience securing hybrid AWS/on-premises environments, including IAM and overall security posture.</li>
<li>Strong proficiency in Python, Terraform, and Puppet.</li>
<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>
<li>Deep expertise in Kubernetes and container security.</li>
<li>Hands-on expertise building GitHub Actions and workflows.</li>
<li>Extensive experience with Prometheus, Grafana, CloudWatch, and Karma.</li>
<li>Well versed in managing and integrating Wazuh.</li>
<li>Hands-on experience with security scanning tools (Semgrep, Trivy, Falco).</li>
<li>Proactive mindset with strong ownership and problem-solving skills.</li>
<li>Excellent critical thinking and analytical abilities.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Terraform, Puppet, Kubernetes, container security, GitHub Actions, Prometheus, Grafana, CloudWatch, Karma, Wazuh, security scanning tools, critical analysis, problem-solving skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803447007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c2b1fd1-6ca</externalid>
      <Title>Staff Software Engineer- AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer on the AI Workload Orchestration Platform team, you will act as a technical leader for CoreWeave&#39;s Kubernetes-native orchestration strategy for AI workloads.</p>
<p>You will define and evolve the architecture for how AI workloads are admitted, scheduled, and governed across large GPU clusters using frameworks such as Kueue, Volcano, and Ray. This platform serves as a strategic complement to SUNK (Slurm on Kubernetes) and underpins both training and inference workloads across the CoreWeave cloud.</p>
<p>This role requires strong systems thinking, cross-team influence, and a long-term view of platform scalability, reliability, and developer experience.</p>
<p>In this role, you will:</p>
<ul>
<li>Own the technical vision and architecture for major portions of the AI Workload Orchestration Platform</li>
<li>Design scalable, reliable orchestration primitives for AI workloads across multiple schedulers and runtimes</li>
<li>Lead cross-team architecture reviews and drive alignment across infrastructure, CKS, and managed inference teams</li>
<li>Define platform standards for reliability, observability, capacity management, and operational excellence</li>
<li>Identify and resolve systemic performance, scalability, and fairness issues across large GPU clusters</li>
<li>Mentor senior engineers and grow technical leadership within the organization</li>
<li>Represent the platform in technical reviews and influence broader CoreWeave platform strategy</li>
</ul>
<p>This means leading technical initiatives across teams without direct authority, owning mission-critical systems at scale, and bringing a strong operational mindset.</p>
<p>If you&#39;re a strong systems thinker with a passion for AI and cloud computing, this could be the perfect opportunity for you to join a team of innovators and help shape the future of AI workload orchestration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, Kubernetes, Distributed systems, Cloud platforms, Kueue, Volcano, Ray, AI infrastructure, ML platforms, HPC, Large-scale batch and streaming systems, Scheduling concepts, Fairness, Pre-emption, Quota management, Multi-tenant isolation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647586006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>973b554f-cde</externalid>
      <Title>Senior Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>
<p>As a senior software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>
<p>Our backend teams span many domains across our essential service platforms, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>
<p>You will deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</p>
<p>You will also build reliable, scalable services using Scala, Kubernetes, and data pipelines using Spark and Databricks to power the pricing infrastructure that serves millions of cluster-hours per day.</p>
<p>Additionally, you will develop product features that empower customers to easily view and control platform usage.</p>
<p>We look for candidates with a BS (or higher) in Computer Science or a related field, 3+ years of production-level experience in Java, Scala, C++, or a similar language, experience developing large-scale distributed systems, and good knowledge of SQL.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, SQL, Kubernetes, Spark, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8029671002</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployments / GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimise and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience in deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools such as Spark, Flume, and Dask</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401k matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>Experience in deploying and managing AI models in production, Experience with the tools of large volume data pipelines like spark, flume, dask, etc., Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>190bd9e9-0d1</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>By joining this team, you’ll have a direct impact on the reliability and operational excellence of Anthropic’s research and product systems.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we’re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a4f3b913-99b</externalid>
      <Title>Staff Software Engineer - Security Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Security Infrastructure team. As a member of this team, you will be responsible for designing and implementing secure infrastructure systems that protect our customers&#39; data. Your work will have a significant impact on the security and reliability of our platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement secure infrastructure systems that protect our customers&#39; data</li>
<li>Collaborate with cross-functional teams to identify and address security risks</li>
<li>Develop and maintain secure coding practices and standards</li>
<li>Participate in code reviews and provide feedback to ensure high-quality code</li>
<li>Stay up-to-date with the latest security threats and technologies</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in data security or related areas</li>
<li>Expertise in two or more of the following: Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</li>
<li>10+ years of experience building large-scale distributed systems with high availability</li>
<li>Leadership skills and experience to lead across functional and organizational lines</li>
<li>Strong communication skills to explain and evangelize data security to senior leaders across the company</li>
<li>Bias to action and passion for delivering high-quality solutions</li>
<li>MS or Ph.D. in Computer Science or related fields</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $190,900-$253,750 USD.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>Databricks is an equal opportunity employer and welcomes applications from diverse candidates.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,900-$253,750 USD</Salaryrange>
      <Skills>Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7994770002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0282a9-9ee</externalid>
      <Title>Staff Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>
<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>
<li>Develop and refine monitoring and alerting to enhance system reliability.</li>
<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>
<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>
<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>
<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos and/or Grafana.</li>
<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>
<li>Proven track record of leading incident management and post-mortem analysis.</li>
<li>Excellent problem-solving, analytical, and communication skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience running and scaling observability tools as a cloud provider.</li>
<li>Experience administering large-scale Kubernetes clusters.</li>
<li>Deep understanding of data-streaming systems.</li>
</ul>
<p>The base salary range for this role is $188,000 to $250,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
<Skills>ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale Kubernetes clusters, Deep understanding of data-streaming systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider for AI, founded in 2017 and listed on Nasdaq since March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577361006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>76d3f53b-3c6</externalid>
      <Title>Staff Software Engineer, Quality and Release Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Staff Software Engineer to join our Quality and Release Platform (QARP) team and lead the technical direction of the platforms that power how dbt Labs builds, tests, and ships software.</p>
<p>Our mission spans two critical areas: release engineering, making it easy for engineers to ship changes quickly, safely, and reliably; and code quality, building a platform that raises the bar for code quality across all of dbt Labs engineering.</p>
<p>In this role, you&#39;ll work with tools like Helm, ArgoCD, Terraform, Python, GitHub Actions, and Kargo to architect and scale our deployment systems, while also helping design and build the tooling, frameworks, and automation that enable engineering teams to consistently produce high-quality code.</p>
<p>This is a high-impact, staff-level role where you&#39;ll set architectural direction, mentor engineers, and drive initiatives that improve developer velocity, code quality, and reliability across the entire engineering organization.</p>
<p>Responsibilities</p>
<ul>
<li>Define and drive the technical strategy and architecture for our CI/CD platform, release management systems, and code quality platform.</li>
<li>Design and build tooling, frameworks, and automation that help engineering teams maintain and improve code quality across the organization.</li>
<li>Lead high-impact initiatives that improve automation, observability, and self-service capabilities for engineers across the organization.</li>
<li>Mentor and level up other engineers on the team, fostering a culture of technical excellence and continuous improvement.</li>
<li>Collaborate across teams and with engineering leadership to identify systemic challenges in our delivery and quality processes and architect solutions to address them.</li>
<li>Evolve our release architecture to support dbt Cloud&#39;s multi-cloud, cell-based infrastructure at scale.</li>
<li>Establish best practices and standards for build pipelines, release workflows, code quality, and infrastructure-as-code that are adopted across engineering.</li>
<li>Serve as a thought leader in engineering&#39;s internal AI strategy: evaluating AI-assisted development tools, defining adoption practices and guardrails, and enabling developers to use AI effectively across the org.</li>
</ul>
<p>Requirements</p>
<ul>
<li>8+ years of software engineering experience, with significant time in platform, infrastructure, release engineering, or developer tooling.</li>
<li>A track record of leading technical strategy and architecture for complex, production-scale CI/CD, code quality, or platform systems.</li>
<li>Deep experience with one or more of the following: Helm, ArgoCD, Terraform, GitHub Actions, or Kubernetes.</li>
<li>Strong background in Python, Go, or Rust for automation, platform tooling, or systems development.</li>
<li>Passion for code quality and experience building or improving tools, linters, static analysis, testing frameworks, or CI checks that help teams write better code.</li>
<li>Demonstrated ability to drive cross-team initiatives and influence engineering-wide practices and standards.</li>
<li>Excellent communication skills, able to translate complex technical concepts for diverse audiences and lead through influence.</li>
<li>Demonstrated interest or hands-on experience with AI-assisted development tools and practices, with a perspective on how AI can improve engineering productivity and code quality.</li>
<li>Experience working asynchronously as part of a fully remote, distributed team.</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Experience with Kargo or similar progressive delivery systems.</li>
<li>Hands-on experience with multi-cloud architectures (AWS, GCP, Azure).</li>
<li>Experience building code quality platforms, static analysis tooling, or testing infrastructure at scale.</li>
<li>Experience defining and rolling out engineering-wide code quality standards or best practices.</li>
<li>A track record of improving developer productivity or release safety across a large engineering organization.</li>
<li>Experience mentoring engineers and shaping team culture in a staff or principal-level role.</li>
<li>Track record of evaluating, championing, and rolling out AI developer tools (e.g., Copilot, Cursor, Claude Code) within an engineering organization.</li>
<li>Experience defining guidelines, guardrails, or best practices for AI-assisted development.</li>
</ul>
<p>Compensation &amp; Benefits</p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $207,000 - $251,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $230,000 - $279,000 USD</li>
</ul>
<p>Equity Stake</p>
<p>Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
</ul>
<p>Our Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews - Technical (3 rounds, 60 Mins each)</li>
<li>Values Interview (30 Mins)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
<Salaryrange>$207,000 - $251,000 USD</Salaryrange>
      <Skills>Helm, ArgoCD, Terraform, Python, GitHub Actions, Kargo, Kubernetes, multi-cloud architectures, code quality platforms, static analysis tooling, testing infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, used by over 90,000 teams every week, with annual recurring revenue exceeding $100 million.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4666468005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1903386-87b</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>
<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>We are looking for a highly skilled engineer with:</p>
<ul>
<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field.</li>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD such as GitHub Actions.</li>
</ul>
<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>
<p>Compensation includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower team members to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4535898008</Applyto>
      <Location>Germany (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26212e9e-5a8</externalid>
      <Title>Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>Ours is a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>
<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field.</li>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD such as GitHub Actions.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>
<li>Flexible vacation time to promote a healthy work-life blend.</li>
<li>Paid parental leave to support you and your family.</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5113847008</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7ad63033-e7e</externalid>
      <Title>Senior Security Engineer I, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Senior Security Engineer I to join our Vulnerability Management team. This is an execution-focused role where you will perform hands-on triage, drive remediation follow-through, and improve day-to-day operational quality across cloud and specialized infrastructure environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Performing hands-on vulnerability triage and risk assessment using team-defined standards and playbooks</li>
<li>Tracking remediation progress with owner teams, escalating blockers, and ensuring clean issue closure</li>
<li>Supporting automated triage workflows by validating outputs and improving signal quality</li>
<li>Contributing to automated remediation campaigns (e.g., EOL cleanup, vulnerable software upgrades, and fix verification)</li>
<li>Supporting zero-day and embargo response by helping inventory affected assets and tracking owner-team deployment status</li>
<li>Participating in incident investigations by gathering technical evidence and supporting impact analysis</li>
<li>Participating in on-call rotation for critical vulnerability events</li>
<li>Maintaining high-quality documentation, runbooks, and operational updates</li>
</ul>
<p>The ideal candidate will have 3+ years of relevant experience in vulnerability management, security operations, application security, or related security engineering. Key skills include a strong understanding of vulnerability assessment fundamentals, hands-on experience with vulnerability management platforms, proficiency in scripting/automation for workflow support, and familiarity with cloud security concepts.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid life insurance, and flexible PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>vulnerability management, security operations, application security, vulnerability assessment fundamentals, vulnerability management platforms, scripting/automation for workflow support, cloud security concepts, security automation/SOAR platforms, container/Kubernetes vulnerability workflows, hardware-adjacent vulnerability domains, compliance evidence collection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4654263006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f79572c2-264</externalid>
      <Title>Technical Support Engineer</Title>
<Description><![CDATA[<p>The Technical Support Engineer acts as a Starburst SME for a book of Majors and Strategic accounts. The role involves providing support for standard and custom deployments, answering technical questions, and assisting with supported LTS upgrades. The engineer will also be responsible for peer training and development, personal continued education, and contributing to reference documentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide support for standard and custom deployments</li>
<li>Answer break/fix and non-break/fix technical questions through SFDC ticketing system</li>
<li>Efficiently reproduce reported issues by leveraging tools (minikube, minitrino, docker-compose, etc.), identify root causes, and provide solutions</li>
<li>Open SEP and Galaxy bug reports in Jira and feature requests in Aha!</li>
</ul>
<p>LTS Upgrades:</p>
<ul>
<li>Provide upgrade support upon customer request</li>
<li>Customer must be on a supported LTS version at the time of request</li>
<li>TSE must communicate unsupported LTS requests to the Account team as these require PS services</li>
</ul>
<p>Monthly Technical Check-ins:</p>
<ul>
<li>Conduct regularly scheduled technical check-ins with each BU</li>
<li>Discuss open support tickets, provide updates on product bugs, and provide best practice recommendations based on your observations and ticket trends</li>
<li>Ensure customer environments are on supported LTS versions</li>
</ul>
<p>Knowledge Sharing/Technical Enablement:</p>
<ul>
<li>Knowledge exchange and continued technical enablement are crucial for the development of our team and the customer experience</li>
<li>It&#39;s essential that we keep our product expertise and documentation current and that all team members have access to information</li>
<li>Contribute to our reference documentation</li>
<li>Lead peer training</li>
<li>Act as a consultant to our content teams</li>
<li>Own your personal technical education journey</li>
</ul>
<p>Project Involvement:</p>
<ul>
<li>Contribute to or drive components of departmental and cross-functional initiatives</li>
</ul>
<p>Partner with Leadership:</p>
<ul>
<li>Identify areas of opportunity with potential solutions for inefficiencies or obstacles within the team and cross-functionally</li>
<li>Provide feedback to your manager on continuing education opportunities, project ideas, etc.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of support experience</li>
<li>3+ years of Big Data, Docker, Kubernetes and cloud technologies experience</li>
<li>Ability to Travel: This role will require 25% in-person travel for purposes including but not limited to new hire onboarding, team and department offsites, customer engagements, and other company events</li>
</ul>
<p>Skills:</p>
<ul>
<li>Big Data (Hadoop, Data Lakes, Spark)</li>
<li>Docker and Kubernetes</li>
<li>Cloud technologies (AWS, Azure, GCP)</li>
<li>Security - Authentication (LDAP, OAuth2.0) and Authorization technologies</li>
<li>SSL/TLS</li>
<li>Linux Skills</li>
<li>DBMS Concepts/SQL Exposure</li>
<li>Languages: SQL, Java, Python, Bash</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Docker, Kubernetes, Cloud technologies, Security, Linux Skills, DBMS Concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a data platform company that provides analytics, applications, and AI services. It has customers in over 60 countries.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/5124882008</Applyto>
      <Location>Hyderabad, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac95264-313</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>
<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>
<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, is required. Experience with GPU-enabled clusters is a bonus.</p>
<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and production experience working with IaC tools such as Terraform or CloudFormation, is a plus.</p>
<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, and production experience with other cloud providers such as Google Cloud and Azure, is a bonus.</p>
<p>Experience with GitOps tooling such as Flux or Argo, and experience with CI/CD such as GitHub Actions, is a plus.</p>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4802840008</Applyto>
      <Location>Romania (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ace16925-ba7</externalid>
      <Title>Engineering Manager - Platform (FinHub)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking an experienced Engineering Manager to lead the Ledger team within the Product Foundations - Platform Product Group.</p>
<p>Ledger is one of the core services for Coinbase, responsible for processing transactions and managing the funds of our users.</p>
<p>As one of Coinbase&#39;s most mission-critical services, Ledger sits at the core of our platform, processing billions in transactions and securing the assets of millions of users.</p>
<p>Today, our scale and complexity of operations have far surpassed the original design of Ledger and fund management systems.</p>
<p>This presents a rare and exciting opportunity to rearchitect foundational infrastructure that will shape Coinbase&#39;s success for the next decade.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage engineering teams to guide the development of features, services, and infrastructure.</li>
<li>Coach your direct reports to have a positive impact on the organization and support their career growth.</li>
<li>Implement the multi-year strategy for our team and collaborate with engineers, designers, product managers, and senior leadership to turn our vision into a tangible roadmap every quarter.</li>
<li>Be a thoughtful technical voice within the team, aiding in diligent architectural decisions and fostering a culture of high-quality code and engineering processes.</li>
<li>Collaborate with Product and Engineering teams to ensure successful delivery and operation of multi-tenanted, distributed systems at scale.</li>
<li>Work closely with our talent organization to identify and recruit exceptional engineers who align with Coinbase&#39;s culture and will help the team scale.</li>
<li>Contribute to and take ownership of processes that drive engineering quality and meet our engineering SLAs.</li>
</ul>
<p>What We Look For In You:</p>
<ul>
<li>At least 7 years of experience in software engineering.</li>
<li>At least 2 years of engineering management experience.</li>
<li>A strong understanding of what constitutes high-quality code and effective engineering practices.</li>
<li>An execution-focused mindset, capable of navigating through ambiguity and delivering results.</li>
<li>An ability to balance long-term strategic thinking with short-term planning.</li>
<li>Experience in creating, delivering, and operating multi-tenanted, distributed systems at scale.</li>
<li>The ability to be hands-on when needed, whether that&#39;s writing or reviewing code and technical documents, participating in on-call rotations and leading incidents, or triaging and troubleshooting bugs.</li>
<li>The ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Prior experience leading a Platform or similar domain team.</li>
<li>Experience designing and operating ledgering or trading systems at scale.</li>
<li>Experience with financial data, accounting systems, or high-precision transaction processing.</li>
<li>Experience with Go, Kubernetes, Postgres, or similar technologies.</li>
<li>Experience with rapid company growth (from startup to mid-size).</li>
<li>Crypto-forward experience, including familiarity with onchain activity such as interacting with Ethereum addresses, using ENS, and engaging with dApps or blockchain-based services.</li>
</ul>
<p>Job #: P76571</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$218,025-$256,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>software engineering, engineering management, high-quality code, effective engineering practices, execution-focused mindset, long-term strategic thinking, short-term planning, multi-tenanted, distributed systems, generative AI tools, copilots, LibreChat, Gemini, Glean, Platform or similar domain team, ledgering or trading systems, financial data, accounting systems, high-precision transaction processing, Go, Kubernetes, Postgres, similar technologies, rapid growth, crypto-forward experience, onchain activity, Ethereum addresses, ENS, dApps or blockchain-based services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7790065</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f6ef3b1-c9b</externalid>
      <Title>Technical Program Manager, Compute</Title>
      <Description><![CDATA[<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>
<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>
<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>
<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>
<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>
<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>
<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>
<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>
<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>
<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>
<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>
<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>
<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>
<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>
<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>
<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>
<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>
<li>Built or improved observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>
<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>
<li>Scaled through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Cloud Infrastructure, Cluster Management, Job Scheduling, Resource Orchestration, Compute Capacity Management, GPU or Accelerator Infrastructure, Observability for Infrastructure Systems, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5138044008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44cc0923-626</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking experienced senior engineers to join our backend teams. As a backend engineer, you will work cross-functionally with various teams and contribute to the design and development of our backend services.</p>
<p>This position will be a hybrid role based in our Bengaluru office, with 2 days on-site as part of our expanding site. EarnIn provides excellent benefits for our employees, including healthcare, internet/cell phone reimbursement, a learning and development stipend, and potential opportunities to travel to our Palo Alto HQ.</p>
<p>Our salary ranges are determined by role, level, and location.</p>
<p>Responsibilities:</p>
<ul>
<li>Design &amp; implement features robust enough for our large scale.</li>
<li>Drive the implementation of new features, break complex problems down to their bare essentials, translate that complexity into elegant design, and create high-quality, maintainable code.</li>
<li>Create and maintain test automation to enable continuous integration and development velocity.</li>
<li>Design &amp; deliver thoughtfully crafted REST APIs to drive the interactions between our client applications and backend services.</li>
<li>Collaborate and mentor other engineers while providing thoughtful guidance using code, design, and architecture reviews.</li>
<li>Work cross-functionally with other teams (data science, design, product, marketing, analytics).</li>
<li>Leverage a broad skill set and help us implement and learn new technologies quickly.</li>
<li>Provide and receive design and implementation evaluations and improve with each iteration.</li>
<li>Debug production issues across our services infrastructure and multiple levels of our stack.</li>
<li>Think about distributed systems &amp; services and care passionately about producing high-quality code.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of development experience in Software Engineering</li>
<li>Bachelor&#39;s, Master’s, or PhD degree in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>
<li>Proficient in at least one modern programming language such as C#, Java, Python, Go, or Scala.</li>
<li>Hands-on experience working with various databases (DynamoDB, MySQL, ElasticSearch) and data pipeline technologies.</li>
<li>Experience with continuous integration and delivery tools.</li>
<li>Experienced in developing and executing functional and integration tests.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs)</li>
<li>Ability to thrive in a fast-paced, dynamic environment and have a bias towards action and results.</li>
<li>Experience with Kubernetes, microservices, and event-driven architecture is a strong plus.</li>
<li>Experience in payments or fintech is a plus.</li>
<li>Experience with payment processors or internal financial systems is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, Java, Python, Go, Scala, DynamoDB, MySQL, ElasticSearch, continuous integration, delivery tools, functional and integration tests, AI-assisted development tools, Kubernetes, microservices, event-driven architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer in earned wage access, providing real-time financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7392209</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc54ed6c-ca0</externalid>
      <Title>Full-Stack Engineer, Core Services (Senior Level)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Full-Stack Engineer to join our Core Services team. As a senior-level engineer, you&#39;ll design, build, and optimise the core systems and management platforms that power the Instabase platform.</p>
<p>This is a high-impact role for a &#39;product-minded engineer&#39;. In our Core Services team, we treat our platform as a product. Because we operate with a lean team, you will have end-to-end ownership, from writing Product Requirement Documents (PRDs) to building the high-performance backend services and scalable infrastructure that support them.</p>
<p>Responsibilities:</p>
<ul>
<li>Full Stack Development: Function as a product-minded engineer for our internal platform. This involves architecting secure infrastructure (Kubernetes, Docker) and backend services (Go, Python, PostgreSQL), while also building the frontend interfaces (React, TypeScript) to support features.</li>
<li>Developer Experience: Create the internal platforms and dashboards that improve developer velocity, reliability, and observability across the entire organisation.</li>
<li>Technical Leadership: Act as a technical leader who mentors junior engineers, contributes to the entire infrastructure codebase, and identifies root causes for critical system issues.</li>
</ul>
<p>About you:</p>
<ul>
<li>Education: BS, MS, or PhD in Computer Science, or equivalent experience in a technical field such as Physics or Mathematics.</li>
<li>Experience: 5+ years of professional software development experience with a strong foundation in CS fundamentals.</li>
<li>Backend Expertise: Proficiency in Go and Python, with a deep understanding of building scalable backend services and APIs.</li>
<li>Frontend Expertise: Strong experience with React, TypeScript, and JavaScript for building complex, data-rich web applications.</li>
<li>Infrastructure &amp; Orchestration: Proficiency with Docker, Kubernetes, and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Product Thinking &amp; UI Design: You are comfortable functioning as your own PM and Designer and write technical specs (PRDs) to define how users interact with infrastructure.</li>
<li>Communication: Excellent communication skills to represent technical and product decisions to the wider engineering team.</li>
</ul>
<p>Good to have:</p>
<ul>
<li>Experience with React Native for mobile or cross-platform applications.</li>
<li>Prior experience in a startup environment where you handled multi-functional responsibilities (Dev, PM, and Design).</li>
</ul>
<p>Compensation: The base salary range for this role is $190,000 to $205,000 + bonus, equity and US benefits.</p>
<p>US Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters: 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you&#39;re in the office.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Instabase is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $205,000 + bonus, equity and US benefits</Salaryrange>
      <Skills>Go, Python, PostgresDB, Kubernetes, Docker, React, TypeScript, JavaScript, Cloud infrastructure (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase provides a platform for organisations to solve unstructured data problems using AI.
It has customers representing large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8186577002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae715d1b-bea</externalid>
      <Title>Engineering Manager - Notebook Dataplane</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. In this role, you will lead the Notebook Dataplane team, which is responsible for running user code in the Notebook. We are undergoing an exciting architecture transformation to run stateful user code as a service for the product teams, providing a reliable and low-latency service for the Serverless products.</p>
<p>As the Engineering Manager, you will play a critical role in driving the technical vision, architecture, and execution for the service. You will lead a team of software engineers and recruit new team members to realize the vision.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and driving the stateful user code execution service vision.</li>
<li>Partnering with serverless platform teams to build the service.</li>
<li>Owning the roadmap and execution, ensuring all team deliverables are met with high quality and on schedule.</li>
<li>Defining team best practices for engineering excellence, including design reviews, code quality, testing strategies, and performance optimizations.</li>
<li>Collaborating cross-functionally with teams across the stack.</li>
</ul>
<p>We are looking for an experienced Engineering Manager with a strong track record of technical leadership and impact. The ideal candidate will have 10+ years of software engineering experience, 3+ years of engineering management experience, and expertise in distributed systems, cloud platforms, and modern web application architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,900-$253,750 USD</Salaryrange>
      <Skills>distributed systems, cloud platforms, modern web application architectures, software engineering, engineering management, containers, Kubernetes, system-level skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8190108002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>96d05ee1-799</externalid>
      <Title>Staff Software Engineer, Cluster Orchestration</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence.</p>
<p>Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.</p>
<p>Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.</p>
<p><strong>About the Role</strong></p>
<p>As part of the Cluster Orchestration team, you will play a key role in advancing CoreWeave&#39;s orchestration platform, the Kubernetes-native foundation that powers AI training and inference at scale, including SUNK (Slurm on Kubernetes) and beyond.</p>
<p>This is an opportunity to help shape one of the most critical layers of the AI cloud: ensuring workloads run seamlessly, reliably, and efficiently across massive GPU clusters.</p>
<p>By building the systems that eliminate infrastructure bottlenecks and create new orchestration capabilities, you will directly empower customers to innovate faster and push the boundaries of what&#39;s possible with AI.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>As a Staff Engineer, you will be a technical leader shaping the long-term strategy for CoreWeave&#39;s orchestration platform.</p>
<p>You&#39;ll define architectural direction, own critical parts of the orchestration platform and other managed services, and drive cross-org initiatives in scheduling, quota enforcement, and scaling at hyperscale.</p>
<p>You&#39;ll mentor senior engineers, establish org-wide best practices in reliability and observability, and ensure CoreWeave&#39;s orchestration layer evolves to meet the demands of next-generation AI workloads.</p>
<p><strong>Who You Are</strong></p>
<ul>
<li>8+ years of software engineering experience.</li>
<li>Proven track record designing and operating large-scale distributed systems in production.</li>
<li>Deep expertise in Slurm/Kubernetes internals and cloud-native development.</li>
<li>Advanced proficiency in Go and in distributed systems design.</li>
<li>Experience setting technical direction and influencing cross-team architecture.</li>
<li>Bachelor&#39;s or Master&#39;s degree in CS, EE, or related field.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Familiarity with orchestration and workflow technologies such as Ray, Kubeflow, Kueue, Istio, Knative, or Argo Workflows.</li>
<li>Experience with distributed workloads, GPU-based applications, or ML pipelines.</li>
<li>Knowledge of scheduling concepts like quota enforcement, pre-emption, and scaling strategies.</li>
<li>Exposure to reliability practices including SLOs, alarms, and post-incident reviews.</li>
<li>Experience with AI infrastructure and workloads (ML training, inference, or HPC).</li>
<li>Ability to mentor senior engineers and elevate organizational standards.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>
<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p><strong>Salary and Benefits</strong></p>
<p>The base salary range for this role is $185,000 to $275,000.</p>
<p>The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p>We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>What We Offer</strong></p>
<p>The range we&#39;ve posted represents the typical compensation range for this role.</p>
<p>To determine actual compensation, we review the market rate for each candidate which can include a variety of factors.</p>
<p>These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000 to $275,000</Salaryrange>
      <Skills>software engineering, distributed systems, Slurm, Kubernetes, cloud-native development, Go, scheduling, quota enforcement, scaling strategies, reliability practices, SLOs, alarms, post-incident reviews, AI infrastructure, workloads, ML training, inference, HPC, orchestration and workflow technologies, Ray, Kubeflow, Kueue, Istio, Knative, Argo Workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658801006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c9a6540-bc6</externalid>
      <Title>Senior Security Operations Engineer</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. As a Senior Security Operations Engineer, you will focus on preventing, detecting and responding to security threats across Brex&#39;s corporate and cloud environments. You will use existing systems and develop tools to improve our security capabilities.</p>
<p>Our team is responsible for functions across the corporate security, detection &amp; response, and infrastructure security domains, and we perform systems engineering and automation to support those functions. Security Operations is part of our wider Trust &amp; IT organization, which means you will have the opportunity to work closely with Application Security, Corporate Engineering, GRC, and IT to improve security configurations, drive positive employee behaviors, and generally prevent events from becoming incidents.</p>
<p>You will also help build and maintain our team’s open source project Substation and have the opportunity to contribute to the Brex Tech Blog. You’ll be part of a team that actively contributes to the wider security community and has a commitment to mentorship and engineering excellence.</p>
<p>We’re looking for individuals with a strong background and interest in detecting, responding to, and resolving security incidents and security challenges. You should be comfortable dealing with lots of moving pieces, changing priorities, and new technologies, while having a keen eye for detail. Most importantly, you should be enthusiastic about working with a variety of backgrounds, roles, and people across Brex.</p>
<p>Building a world-class financial service requires world-class security.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$192,000 CAD - $240,000 CAD</Salaryrange>
      <Skills>CI/CD systems, DevOps workflows, Cloud environments, Security services and tools, Go and Python programming, Securing distributed systems in AWS, cloud and Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides corporate cards and banking services to businesses.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8339287002</Applyto>
      <Location>Vancouver, British Columbia, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da47baa5-82a</externalid>
      <Title>Senior Staff Software Engineer - IAM</Title>
      <Description><![CDATA[<p>We are seeking a Senior Staff Software Engineer - IAM to join our Trust &amp; Safety team. As a key member of our security engineering discipline, you will be responsible for creating the vision and defining the strategy for our security space. Your impact will be felt across the organization, and you will be instrumental in making Databricks a safer platform for our customers.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Identifying and plugging key gaps in our infrastructure and services to make Databricks safer for our customers</li>
<li>Attracting top talent from across the industry to join our security engineering team</li>
<li>Representing the security engineering discipline throughout the organization, having a powerful voice to make us more data-driven</li>
<li>Representing Databricks at academic and industry conferences &amp; events</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>9+ years of experience in Data Security or related areas and expertise in two or more of the following: Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</li>
<li>15+ years of experience building large scale distributed systems with high availability</li>
<li>Leadership skills and experience to lead across functional and organizational lines</li>
<li>Strong communication skills to explain and evangelize Data Security to senior leaders across the company</li>
<li>Bias to action and passion for delivering high-quality solutions</li>
<li>MS or Ph.D. in Computer Science or related fields</li>
</ul>
<p>In terms of compensation, Databricks is committed to fair and equitable compensation practices. The pay range for this role is $220,400-$297,400 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
<p>If you&#39;re passionate about data security and want to join a team that&#39;s shaping the future of data and AI, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,400-$297,400 USD</Salaryrange>
      <Skills>Cryptography, Kubernetes Security, Web Security, Governance, Privacy, Trust, Safety, Authentication, Identity Management, Access Control, Key Management, Inter-Service Authentication, Secure Application Frameworks, Detection &amp; Response</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7274557002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d50772ab-afe</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p>We are seeking a Staff / Senior Software Engineer to join our Cloud Inference team. The successful candidate will design and build infrastructure that serves Claude across multiple cloud service providers (CSPs), accounting for differences in compute hardware, networking, APIs, and operational models.</p>
<p>The ideal candidate will have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users. They will also have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimise inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyse observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration</li>
<li>Strong interest in inference</li>
<li>Thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>
<li>Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>Pick up slack, even when it goes outside your job description</li>
</ul>
<p>Preferred skills:</p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimisation, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimisation, batching, caching, and serving strategies</li>
<li>Experience with machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p>Salary Range: $300,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$485,000 USD</Salaryrange>
      <Skills>high-performance, large-scale distributed systems, cloud computing (AWS, GCP, Azure), kubernetes, infrastructure as code, container orchestration, inference, cross-functional collaboration, autonomy and self-driven, platform-agnostic tooling, capacity management, cost optimisation, resource planning, llm inference optimisation, machine learning infrastructure, ci/cd systems, multi-region deployments, geographic routing, global traffic management, python, rust, direct experience working with csp partner teams, building platform-agnostic tooling, hands-on experience with capacity management, strong familiarity with llm inference optimisation, experience with machine learning infrastructure, background designing and building ci/cd systems, solid understanding of multi-region deployments, proficiency in python or rust</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9f403a5-e14</externalid>
      <Title>Staff Engineer, Data Services</Title>
      <Description><![CDATA[<p>The Data Platform Team at CoreWeave is seeking a Staff Software Engineer with specialization in database and stream processing to help fulfill the goal of our global datastore strategy and establish communication models for our data flow.</p>
<p>As a member of the Data Platform Team, you will have the opportunity to drive technical decision-making to accelerate delivery, mentor engineers, and grow team capability. You will champion event-driven architecture adoption and build consensus across the organization. You will participate in the company&#39;s data infra strategy planning and initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs.</li>
<li>Develop a stream processing architecture and solve for scalability and reliability.</li>
<li>Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team&#39;s on-call rotation.</li>
<li>Establish guidelines and guardrails for data access and storage for stakeholder teams.</li>
<li>Ensure compliance with data protection regulations.</li>
</ul>
<p>To be successful in this role, you will need 12+ years of software engineering experience. You should understand the CAP theorem and concurrency models, and be able to clearly define data models and establish guidelines around data management. You should be familiar with at least one distributed NewSQL datastore, such as CockroachDB, TiDB, YDB, or YugabyteDB, and/or with stream processing tools such as NATS or Kafka.</p>
<p>Additionally, you should have experience with designing and operating systems at scale, with API design and microservices, and with Kubernetes, along with interest in or experience with using it for event-driven and/or stateful orchestration. You should be excited about the opportunity to contribute to a Kubernetes operator that manages data systems.</p>
<p>The base salary range for this role is $188,000 to $250,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>database, stream processing, NewSQL datastores, Kubernetes, API designs, microservices, event-driven architecture, scalability, reliability, security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4592097006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>