<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>bee517db-e9c</externalid>
      <Title>DevOps Engineer (all genders)</Title>
      <Description><![CDATA[<p>Join our DevOps team at Holidu, a central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>
<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>
<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>
<li>Container Orchestration: Kubernetes with Helm</li>
<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/CDK</li>
<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>
<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>
<li>Scripting: Python, Go, Bash</li>
<li>Version Control: GitHub</li>
<li>Collaboration: Jira (Agile)</li>
<li>Automation: n8n, AI-assisted tooling (Agentic ADK)</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DevOps Engineer, you will be responsible for:</p>
<ul>
<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>
<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>
<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>
<li>Troubleshooting production issues and participating in capacity planning</li>
<li>Maintaining and troubleshooting Kubernetes clusters: deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>
<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>
<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub Actions, ArgoRollouts and ArgoCD</li>
<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>
<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>
<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>
<li>Hands-on experience with Docker and Kubernetes in production environments: deploying, scaling, and troubleshooting containerized workloads</li>
<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>
<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>
<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>
<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>
<li>Familiarity with logging and log aggregation systems (Elastic Stack, OpenTelemetry, or similar)</li>
<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>
<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with Helm charts and Kubernetes package management</li>
<li>Familiarity with GitOps workflows (e.g., GitHub Actions, ArgoCD, Flux)</li>
<li>Experience designing AWS services-based architectures</li>
<li>Experience with AI automation or low-code/no-code platforms such as n8n</li>
<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>
<li>Exposure to cost optimization strategies for cloud infrastructure</li>
<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>
<li>Experience with DevSecOps practices: integrating security scanning and compliance into CI/CD pipelines</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>
<li>Technology: Work in a modern tech environment</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud, Container Orchestration, Infrastructure as Code, Monitoring &amp; Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2595036</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77c41bb-0ad</externalid>
      <Title>Application Security Engineer</Title>
<Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert, you will bring direct experience with a wide range of security technologies, tools, and methodologies. The role suits an experienced Application Security Engineer with a proven understanding of enterprise security and AI security, and will focus on building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm’s information and computer systems. Millennium is a complex and robust technical environment and securing the Firm from external and internal threats is a top priority.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</li>
<li>Conduct specialized threat modeling, red teaming, and risk assessments for AI/ML models (e.g., testing for prompt injection, model theft, and data poisoning).</li>
<li>Lead risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects.</li>
<li>Engage throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards.</li>
<li>Evangelize AppSec and AI security best practices through developer education, training materials, and outreach.</li>
<li>Design robust security architectures and integrate automated security testing (SAST/DAST/SCA) into CI/CD pipelines.</li>
<li>Partner with Technology, Trading, Legal, and Compliance to create policies and communicate technical risks to non-technical stakeholders.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field.</li>
<li>5+ years’ experience working as an Application Security Engineer, Software Engineer, or similar role.</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs.</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI.</li>
<li>Experience in working with global teams, collaborating on code and presentations.</li>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols.</li>
<li>Experience with common SCM &amp; CI/CD technologies (GitHub, Jenkins, Artifactory, etc.), and with integrating security scanning and vulnerability management into CI/CD pipelines</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions.</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar.</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>
<li>Familiarity with web application security testing tools and methodologies.</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus.</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Automated security testing, CI/CD pipelines, Security architectures, Secure configuration principles, Cryptography fundamentals, Encryption protocols, SCM &amp; CI/CD technologies, Security scanning, Vulnerability management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets management, Password vault technologies, Secure programming, Infrastructure as Code tools, Web application security testing tools, Methodologies, Security frameworks, Standards, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a technology-focused organisation that provides infrastructure services to various businesses.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629927</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af8ed06d-a9a</externalid>
      <Title>Forward Deployed Software Engineer - Equities Technology</Title>
      <Description><![CDATA[<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>
<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>
<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>
<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>
<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>
<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides technology solutions to the financial services industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953439247</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a75ea8b-5b4</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies, you will play a key role in building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The successful candidate will have a proven understanding of enterprise security and AI security and will focus on defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks</li>
<li>Conducting specialized threat modeling, red teaming, and risk assessments for AI/ML models</li>
<li>Leading risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects</li>
<li>Engaging throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards</li>
<li>Evangelizing AppSec and AI security best practices through developer education, training materials, and outreach</li>
</ul>
<p>Qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field</li>
<li>5+ years&#39; experience working as an Application Security Engineer, Software Engineer, or similar role</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI</li>
<li>Experience in working with global teams, collaborating on code and presentations</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols</li>
<li>Experience with common SCM &amp; CI/CD technologies (GitHub, Jenkins, Artifactory, etc.), and with integrating security scanning and vulnerability management into CI/CD pipelines</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>
<li>Familiarity with web application security testing tools and methodologies</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous</li>
</ul>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Developer education, Training materials, Outreach, Common SCM &amp; CI/CD technologies, GitHub, Jenkins, Artifactory, Security Scanning, Vulnerability Management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets Management &amp; Password Vault technologies, Delinea Secret Server, Hashicorp Vault, Secure programming, Python, Java, C++, C#, Infrastructure as Code tools, CloudFormation, Terraform, Ansible, Web application security testing tools, Methodologies, Security frameworks, Standards, ISO 27001, NIST, OWASP, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a department within a larger organisation that focuses on providing and maintaining the underlying technology infrastructure.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629908</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>038b4893-89b</externalid>
      <Title>IT Audit Lead</Title>
      <Description><![CDATA[<p>We are seeking an IT Audit Lead to join our Management Controls and Internal Audit Group. As an IT Audit Lead, you will be responsible for leading IT audit engagements, planning and carrying out the audit, and continuously working to improve processes and procedures. You will work closely with the Head of Information Technology Audit to develop and maintain an in-depth understanding of the technology organization, business areas, and support functions.</p>
<p>Primary Responsibilities:</p>
<ul>
<li>Lead and perform IT and integrated audit engagements, with support from IT Auditors, focusing on IT core infrastructure, trade execution and trade processing infrastructure, critical applications, and IT general controls;</li>
<li>Build and maintain relationships with key stakeholders, establishing a culture of engagement while adding value;</li>
<li>Develop and maintain an in-depth understanding of the technology organization, business areas, and support functions;</li>
<li>Support the Head of Information Technology Audit with audit planning, scope design, internal control assessment, raising and reporting of issues, and monitoring of remediation plans;</li>
<li>Participate in department-wide initiatives focused on continually improving firm processes and the control environment;</li>
<li>Assist with annual risk assessment process, audit plan creation, and other departmental administrative projects.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>12+ years of IT audit experience with exposure to core IT infrastructure, cyber security, equities trading, fixed-income trading, operations, and/or trade support functions;</li>
<li>Strong analytical and reporting skills and effective relationship-building experience;</li>
<li>Effective communication (verbal and written) and interpersonal skills, with the ability to present sophisticated and sensitive issues to management and inspire change;</li>
<li>Knowledge and experience of core IT infrastructure platforms (e.g., Windows, Unix, Sybase, SQL), cyber security, cloud technology, networks, firewalls, and/or data analytics;</li>
<li>Extensive knowledge of the audit lifecycle and the evaluation of IT general controls and IT automated controls;</li>
<li>Bachelor’s degree in Information Systems, Computer Science/Engineering, or other relevant fields;</li>
<li>A related certification (e.g., CISA, CISSP, CIA) is desired;</li>
<li>Domestic and international travel requirements: 0%-10%.</li>
</ul>
<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $250,000</Salaryrange>
      <Skills>IT audit experience, core IT infrastructure, cyber security, equities trading, fixed-income trading, operations, trade support functions, analytical and reporting skills, relationship-building experience, communication (verbal and written) and interpersonal skills, knowledge of core IT infrastructure platforms, cloud technology, networks, firewalls, data analytics, audit lifecycle, IT general controls, IT automated controls</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Audit</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a company that exists to assist with compliance, legal, and ethics oversight.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953849622</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>46a2bf10-599</externalid>
      <Title>Quantitative Developer (KDB &amp; Python) -  Central Liquidity Strategies</Title>
      <Description><![CDATA[<p>We are seeking a highly driven, results-oriented Quantitative Developer to join a new and dynamic group tasked with developing our next-generation quantitative research platform.</p>
<p>Based in New York, the successful candidate will have strong analytical and problem-solving skills, excellent attention to detail, and the ability to explain sophisticated technical concepts clearly and concisely.</p>
<p>The role requires high autonomy, as much of the senior technical team is based in Dublin and the New York hire will be relied upon heavily in-region.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Contribute to a wide range of projects and deliver quickly and iteratively.</li>
<li>Write, support, maintain, and test code following best practices, including unit testing, documentation, and automation within standard CI/CD processes.</li>
<li>Support key datasets (live and historical), ML models, and the supporting infrastructure spanning multiple technologies, languages, and systems.</li>
<li>Partner with team members to set the overall direction, design, and architecture of the platform; collaborate with key stakeholders across the business.</li>
</ul>
<p>Qualifications / Skills Required:</p>
<ul>
<li>6+ years of kdb+ and Python experience in a quantitative finance setting, with a proven track record of deploying systems at scale.</li>
<li>Bachelor’s degree in Mathematics, Computer Science, Financial Engineering, Operations Research, or similar.</li>
<li>Fluency with enterprise-grade technology used for research and trading analytics; ability to operate independently.</li>
<li>Strong communication skills and the ability to work effectively in a team environment.</li>
</ul>
<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $250,000</Salaryrange>
      <Skills>kdb+, Python, enterprise-grade technology, research and trading analytics, unit testing, documentation, automation, CI/CD processes, key datasets, ML models, supporting infrastructure, PyKX, C++, cash equities, live analytics, cloud tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Electronic Trading Solutions</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Electronic Trading Solutions is a company that provides electronic trading solutions. It has a presence in New York.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954578859</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8e57ca54-2af</externalid>
      <Title>Business Analyst / Project Manager - Equity Volatility</Title>
      <Description><![CDATA[<p>We are seeking a Business Analyst / Project Manager to join our team in London. As a key member of our Equities Volatility business, you will play a crucial role in delivering tools that enable Portfolio Managers to perform research, backtest strategies, and risk manage an equities derivatives portfolio.</p>
<p>In this role, you will work closely with the business to gather and synthesise requirements, analyse data dependencies, and create project plans. You will also be responsible for managing day-to-day project deliverables, highlighting and escalating issues, and resolving conflicts and roadblocks.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Working with the business to gather and synthesise requirements across both investment teams and core business asks</li>
<li>Analysing upstream/downstream data dependencies and creating the necessary BRD/FRD, JIRA, and project plans</li>
<li>Managing day-to-day project deliverables; highlighting, escalating, and resolving issues, conflicts, and roadblocks</li>
<li>Creating user guides and other documentation that will be used to onboard new users to the platform</li>
<li>Creating and maintaining product roadmaps and other project artifacts required to manage stakeholder expectations</li>
<li>Coordinating and tracking development tasks, testing, and verifying releases to ensure user requirements are being met</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>Subject matter expertise in Equities Derivatives</li>
<li>Working knowledge of Equities Derivatives market data, pricing, and risk management methodologies</li>
<li>Hands-on BA/PM experience with proprietary derivatives pricing, analytics, and risk management systems</li>
<li>High-level understanding of Asia Equities Options markets and of requirements around listed options execution infrastructure</li>
</ul>
<p>Qualifications include:</p>
<ul>
<li>6+ years of experience in a relevant BA/PM role</li>
<li>Experience working in a scrum environment</li>
<li>Working knowledge of JIRA</li>
<li>Development background or CFA</li>
<li>Basic understanding of Python and the ability to read Java code to understand what has been developed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Subject matter expertise in Equities Derivatives, Working knowledge of Equities Derivatives market data, pricing, and risk management methodologies, Hands-on BA/PM experience working on proprietary derivatives pricing, analytics, and risk management systems, High-level understanding of Asia Equities Options markets and requirements around listed options execution infrastructure, Good communication and interpersonal skills, excellent written documentation skills</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides proprietary tools and services to support its Equities Volatility business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955761568</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred). The successful candidate will have:</p>
<ul>
<li>Strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles</li>
<li>Proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK)</li>
<li>Practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment</li>
<li>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines</li>
<li>A good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus</li>
<li>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management</li>
<li>Strong problem-solving and analytical skills, with a demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams</li>
<li>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders</li>
<li>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls</li>
</ul>
<p>Desirable:</p>
<ul>
<li>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies</li>
<li>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams</li>
<li>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink)</li>
<li>Familiarity with Capital Markets concepts and operating models</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or any other scientific field</li>
<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them in Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p><strong>Your 12-Month Journey</strong></p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months, you will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year, you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p><strong>What You’ll Be Doing</strong></p>
<p><strong>Architect Scalable Infrastructure-as-Code:</strong> Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p><strong>Deploy State-of-the-Art Pipelines:</strong> Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p><strong>Champion Data Quality &amp; Performance:</strong> Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p><strong>Technical Roadmap &amp; Ownership:</strong> Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p><strong>Collaboration:</strong> You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p><strong>What You Bring</strong></p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments</li>
<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran, and an understanding of how these tools fit into a broader ecosystem</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform)</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools; you know how to design DAGs that are resilient and easy to debug</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments)</li>
<li>Programming: expert-level Python and advanced SQL; you are comfortable writing clean, testable, and modular code</li>
<li>Comfort in a fast-paced environment</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of your own project scoping and backlog management</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62c461dc-a98</externalid>
      <Title>Lead Cloud Engineer</Title>
      <Description><![CDATA[<p>For Digital Hub Warsaw, we&#39;re looking for a Lead Cloud Engineer to join our team. As a visionary company, we&#39;re driven to solve the world&#39;s toughest challenges and strive for a world where &#39;Health for all, Hunger for none&#39; is no longer a dream, but a real possibility.</p>
<p>We&#39;re building an enterprise-grade Infrastructure Operations Platform named VOPs to facilitate the most complex IT infrastructure operations for all IT teams at Bayer globally. Your responsibilities will include:</p>
<ul>
<li><strong>Planning and Design:</strong> Join the team responsible for planning and running our VOPs platform.</li>
<li><strong>Leadership:</strong> Mentor a team of engineers, providing guidance and support in the implementation of cloud solutions.</li>
<li><strong>Collaboration with Stakeholders:</strong> Work closely with Squad Leads and other stakeholders to understand requirements and align integration strategies with business goals.</li>
<li><strong>Technical Oversight:</strong> Ensure that solutions are scalable, reliable, maintainable, and secure, adhering to best practices in IT architecture and in line with Bayer&#39;s strategy.</li>
<li><strong>Documentation and Standards:</strong> Create, maintain, and review comprehensive documentation for processes, standards, and best practices.</li>
<li><strong>Intercultural Communication:</strong> Foster an environment of open communication and collaboration among diverse teams across different geographical locations.</li>
</ul>
<p>Our requirements include:</p>
<ul>
<li>Degree in Computer Science, Information Technology, or a related field, or equivalent practical experience as an IT engineer</li>
<li>At least 6 years of experience in Azure (other clouds will be a plus)</li>
<li>Proficiency in IT architecture &amp; design, specifically in infrastructure automation, provisioning, and maintenance</li>
<li>Strong analytical skills with the ability to troubleshoot and resolve technical issues effectively, even under pressure</li>
<li>Familiarity with IaC (e.g., Terraform) and strong proficiency in Python</li>
<li>Linux command line tools and shell scripting</li>
<li>Experience with building IT systems in regulated environments</li>
<li>Integration and automation expertise: knowledge of CI/CD processes and experience in building and deploying integration solutions (Azure DevOps, GitHub Repos, and GitHub Actions)</li>
<li>Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders</li>
<li>Experience with API management and/or design will be appreciated</li>
<li>Intercultural competence: ability to work collaboratively in a multicultural environment, respecting diverse perspectives and fostering teamwork, establishing and maintaining a robust professional network</li>
<li>Language proficiency: fluent in English, both spoken and written</li>
</ul>
<p>What we offer includes:</p>
<ul>
<li>A flexible, hybrid work model</li>
<li>Great workplace in a new modern office in Warsaw</li>
<li>Career development, 360° Feedback &amp; Mentoring programme</li>
<li>Wide access to professional development tools, trainings, &amp; conferences</li>
<li>Company Bonus &amp; Reward Structure</li>
<li>VIP Medical Care Package (including Dental &amp; Mental health)</li>
<li>Holiday allowance (&#39;Wczasy pod gruszą&#39;)</li>
<li>Life &amp; Travel Insurance</li>
<li>Pension plan</li>
<li>Co-financed sport card</li>
<li>FitProfit</li>
<li>Meals Subsidy in Office</li>
<li>Additional days off</li>
<li>Budget for Home Office Setup &amp; Maintenance</li>
<li>Access to Company Game Room equipped with table tennis, soccer table, Sony PlayStation 5 and Xbox Series X consoles with premium game passes, and massage chairs</li>
<li>Tailor-made support in relocation to Warsaw when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Azure, IT Architecture &amp; design, Infrastructure automation, Provisioning, Maintenance, IaC (Terraform), Python, Linux command line tools, Shell scripting, CI/CD processes, Azure DevOps, GitHub Repos, GitHub Actions, API management, API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a global presence.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949973780545</Applyto>
      <Location>Warsaw</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>52261e57-a37</externalid>
      <Title>Senior Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p>You&#39;ll work with modern tooling, a cross-functional team, and teammates who care deeply about impact, collaboration, and learning together.</p>
<p>As a Senior Software Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>You don&#39;t need to meet every requirement; we&#39;re looking for strong fundamentals, ownership, and the motivation to grow.</p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>
<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>
<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>
<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>
<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>
<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, ML model deployment, LLM tools and agents, Data science models, Reliable and scalable production systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a company that provides a platform for hosting and booking accommodations.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597551</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f156ea4b-6a3</externalid>
      <Title>Senior DataOps Engineer / Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>Join our Dynamic Pricing &amp; Revenue Management team as a Senior DataOps Engineer / Software Engineer. You&#39;ll work alongside a Data Scientist and a Data Analyst to develop a smart, dynamic, and data-driven pricing strategy. Our team uses modern tooling, including S3, Redshift, Athena, DuckDB, MLflow, SageMaker, Terraform, Docker, Jenkins, and AWS EKS.</p>
<p>As a Senior DataOps Engineer / Software Engineer, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You&#39;ll bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>We&#39;re looking for someone with 4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps. You should have strong hands-on skills in Python, experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform), familiarity with cloud platforms (AWS preferred), and deploying services in production. Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</p>
<p>Our team is passionate about using cutting-edge LLM tools and agents to improve productivity. We&#39;re looking for someone who is proactive, hands-on, and takes ownership of problems and drives solutions forward.</p>
<p>Benefits include:</p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment with a pace of a scale-up combined with the stability of a proven business model.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p>If you&#39;re interested in joining our team, apply online on our careers page! Your first travel contact will be Katharina from HR.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, Deploying services in production, ML model deployment, LLM tools and agents</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH operates a platform for holiday rentals, connecting hosts with guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2523360</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>df410cc0-50b</externalid>
      <Title>Equity Derivatives Structurer</Title>
      <Description><![CDATA[<p>As an Equities Structurer, you&#39;ll play a pivotal role in driving innovation and delivering tailored solutions for our clients. You&#39;ll collaborate closely with trading, quantitative, and sales teams to design, price, and implement structured equity products, ensuring we remain at the forefront of the market.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Drive digital transformation by leading digitalisation and automation initiatives to streamline business processes and enhance operational efficiency</li>
<li>Enhance pricing infrastructure by developing and optimising pricing tools and systems, ensuring robust and scalable solutions for both indicative and live pricing of structured transactions</li>
<li>Innovate products by designing and launching new structured products, including Structured Solutions, Risk Recycling, and Quantitative Investment Strategies (QIS)</li>
<li>Deliver client solutions by partnering with sales and structuring teams to assess client needs and provide bespoke solutions aligned with their objectives</li>
<li>Generate market analysis by creating thematic investment ideas based on evolving market conditions and trends</li>
<li>Prepare pitchbooks by creating compelling marketing materials to promote proprietary indices and structured products</li>
<li>Manage secondary market activities by overseeing secondary market pricing, including add-ons and unwind transactions</li>
<li>Collaborate cross-functionally by working with traders and quantitative analysts to enhance pricing and back-testing infrastructure (Python and VBA)</li>
<li>Provide business insight by delivering analytics and reporting to support business decision-making and strategic planning</li>
</ul>
<p>Your Qualifications:</p>
<ul>
<li>Market Knowledge: Strong understanding of equity derivative fundamentals</li>
<li>Analytical Excellence: Exceptional analytical, problem-solving, and decision-making skills</li>
<li>Communication Skills: Outstanding verbal and written communication abilities, with a talent for explaining complex concepts clearly</li>
<li>Agility Under Pressure: Ability to manage multiple priorities and perform effectively in a fast-paced, high-pressure environment</li>
<li>Technical Proficiency: Experience with Python and Excel/VBA is highly desirable</li>
</ul>
<p>As an HSBC employee, you will have access to tailored professional development opportunities to ensure you have the right skills for today and tomorrow. We offer a competitive pay and benefits package including a robust Wellness Hub, all in a welcoming and inclusive work environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>equity derivatives, digital transformation, pricing infrastructure, structured products, market analysis, pitchbooks, secondary market activities, collaboration, business insight, Python, Excel/VBA, data analysis, financial modeling, risk management</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is a multinational banking and financial services organisation with over 40 million customers worldwide.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774609816845</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>978310df-422</externalid>
      <Title>Staff Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our International Public Sector team. In this role, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and providing upskilling and advisory services to spread the impact of AI.</p>
<p>You will serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation.</p>
<ul>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team</li>
</ul>
<p>Ideally, you&#39;d have a Master&#39;s or PhD in Computer Science or equivalent deep industry experience in architecting complex, distributed systems.</p>
<ul>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP)</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction</li>
<li>Proficiency in Arabic</li>
</ul>
<p>Nice-to-haves include past experience as a CTO or founding engineer at a startup, or in a forward deployed engineer / dedicated customer engineer role; experience working cross-functionally with operations; and a proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, LLMs, Cloud computing, Security, Scalability, Distributed systems, Arabic, Startup experience, CTO experience, Founding engineer experience, Forward deployed engineer experience, Customer engineer experience, Operations experience, LLM-driven solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673314005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9a42f26c-511</externalid>
      <Title>Evals Engineer, Applied AI</Title>
      <Description><![CDATA[<p>We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry&#39;s leading GenAI Evaluation Suite.</p>
<p>As a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise, you will partner with Scale&#39;s Operations team and enterprise customers to translate ambiguity into structured evaluation data. This involves guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.</p>
<p>Your responsibilities will also include analysing feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments. You will design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including creating models that critique, grade, and explain agent outputs.</p>
<p>To succeed in this role, you will need a strong foundational knowledge of large language models, a passion for tackling complex evaluation challenges, and the ability to thrive in a dynamic, fast-paced research environment. You should be able to think outside the box, stay current with the latest literature in AI evaluation, and be passionate about integrating novel research ideas into our workflows to build best-in-class evaluation systems.</p>
<p>In addition to your technical expertise, you will need excellent communication and collaboration skills, as you will work closely with cross-functional teams to drive project success.</p>
<p>If you are a motivated and detail-oriented individual with a passion for AI research and evaluation, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, Large Language Models, Generative AI, Machine Learning, Applied Research, Evaluation Infrastructure, Advanced degree in Computer Science, Machine Learning, or a related quantitative field, Published research in leading ML or AI conferences, Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems, Experience collaborating with operations or external teams to define high-quality human annotator guidelines, Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4629589005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b40b693d-a0d</externalid>
      <Title>Senior Software Engineer, Agentic Data Products</Title>
      <Description><![CDATA[<p>We&#39;re forming a new Agentic Data Products team focused on building the next generation of agent-powered tools that ground AI in real operational workflows. Our goal is to help enterprises demystify their data layers and deploy intelligent, agentic systems that can reason over data, take action, and deliver measurable outcomes.</p>
<p>This is a 0→1 build team. We’re looking for a sharp, product-minded Senior Engineer who thrives in ambiguity, moves quickly, and enjoys building new systems from scratch alongside customers and cross-functional partners. You’ll work closely with product, forward-deployed engineers, data scientists, and applied AI teams to turn real-world problems into scalable, production solutions.</p>
<p>If you like shipping fast, owning outcomes, and working across the stack, from polished frontends to distributed backends to LLM integrations, this role is for you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from concept and design through production deployment</li>
<li>Build intuitive, high-performance frontend experiences using React + TypeScript</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and AI/ML infrastructure</li>
<li>Integrate LLMs, vector databases, and agentic frameworks to power intelligent workflows and decision-making systems</li>
<li>Ship quickly through tight experimentation loops while maintaining high quality and reliability</li>
<li>Help define the technical direction and architecture of a brand-new team and product surface</li>
<li>Adapt across the stack and learn new tools as needed to solve real problems end-to-end</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time software engineering experience</li>
<li>0→1 product build experience</li>
<li>Familiarity with LLMs, embeddings, vector databases, or modern AI data products/tools</li>
<li>Experience with distributed systems and cloud-based architectures</li>
<li>Prior experience mentoring or leading a team</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Strong product intuition and customer empathy</li>
<li>Bias toward action and rapid iteration</li>
<li>Ownership mentality: you see problems through to outcomes</li>
<li>Comfort collaborating across engineering, product, data science, and applied AI</li>
<li>Excitement about building agentic systems that make AI genuinely useful in the real world</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, Distributed systems, Data pipelines, AI/ML infrastructure, LLMs, Vector databases, Agentic frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4653827005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d46c1e8f-b8c</externalid>
      <Title>Strategic Deals Lead, Compute &amp; Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Strategic Deals Lead, Compute &amp; Infrastructure team member to drive the planning and execution of programs critical to Anthropic&#39;s compute infrastructure strategy.</p>
<p>In this role, you will manage internal and external stakeholders to bring clarity to our compute technology roadmaps, help prioritise across technical and non-technical teams, and focus on securing and delivering compute capacity.</p>
<p>As a key member of our team, you will work closely with engineering, finance, and partnership teams to drive execution of technical roadmaps, support deal structuring, and manage the operational aspects of our compute partnerships.</p>
<p>This role combines technical program management with elements of strategic operations, partnership development, and financial analysis.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive cross-functional coordination across Engineering, Finance, and external partners to define, scope, and deliver on compute partnership initiatives</li>
<li>Develop and maintain detailed project plans, timelines, and status reporting for technical programs related to compute infrastructure and partnerships</li>
<li>Partner with engineering leaders to translate technical requirements into actionable roadmaps and track execution against milestones</li>
<li>Support the structuring and negotiation of strategic compute deals, including financial modelling, term analysis, and vendor evaluation</li>
<li>Build and maintain relationships with key stakeholders at cloud providers and infrastructure partners</li>
<li>Develop and manage systems, processes, and documentation to support program management efficiency and stakeholder visibility</li>
<li>Analyse financial and operational data to inform decision-making on compute capacity planning and vendor strategy</li>
<li>Provide clear and transparent reporting on program status, issues, and risks to leadership</li>
</ul>
<p>You might be a good fit if you have:</p>
<ul>
<li>8-10 years of experience in technical product/program management, business development, or strategic partnerships roles at technology companies</li>
<li>Experience structuring and negotiating strategic customer deals or partnerships within the technology space (cloud services, semiconductors, data centre/infrastructure)</li>
<li>Background in cloud computing, data centre infrastructure, compute/silicon development, or technology-focused investment banking or consulting</li>
<li>Familiarity with data centre infrastructure, compute hardware, and/or silicon development cycles</li>
<li>Comfort with financial analysis and modelling; experience with vendor financing arrangements is a plus</li>
<li>Strong interpersonal and communication skills with the ability to influence and align diverse stakeholders</li>
<li>Ability to drive clarity in ambiguous environments and manage competing priorities with high-quality execution</li>
<li>A track record of managing cross-functional initiatives in fast-paced, scaling technology environments</li>
<li>A passion for Anthropic&#39;s mission and ensuring safe AI development</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing external partnerships with large-scale cloud providers or hardware vendors</li>
<li>Understanding of AI/ML infrastructure requirements and compute capacity planning</li>
<li>Experience with vendor financing, equipment leasing, or infrastructure investment analysis</li>
<li>Background in technical due diligence or technology M&amp;A</li>
</ul>
<p>The annual compensation range for this role is $250,000-$310,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$250,000-$310,000 USD</Salaryrange>
      <Skills>Technical product/program management, Business development, Strategic partnerships, Cloud computing, Data centre infrastructure, Compute/silicon development, Financial analysis and modelling, Vendor financing arrangements, Experience structuring and negotiating strategic customer deals or partnerships, Background in technology-focused investment banking or consulting, Familiarity with data centre infrastructure, compute hardware, and/or silicon development cycles, Understanding of AI/ML infrastructure requirements and compute capacity planning, Experience with vendor financing, equipment leasing, or infrastructure investment analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5169670008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d35e26ec-593</externalid>
      <Title>Forward Deployed Engineer, GenAI</Title>
      <Description><![CDATA[<p>About Scale AI</p>
<p>At Scale AI, our mission is to accelerate the development of AI applications.</p>
<p>As a Forward Deployed Engineer, you&#39;ll be at the forefront of providing the critical data infrastructure that powers the most advanced AI models, directly influencing how humanity interacts with AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive Impact: Directly contribute to the advancement of AI by delivering critical data solutions for leading AI innovators and government agencies.</li>
<li>Customer Collaboration: Interact daily with our technical customers, understanding their unique challenges and translating them into impactful solutions.</li>
<li>End-to-End Development: Design, build, and deploy features across the entire stack, from front-end interfaces to back-end systems and infrastructure.</li>
<li>Rapid Experimentation: Deliver high-quality experiments quickly, iterating rapidly to meet customer needs and drive innovation.</li>
<li>Strategic Influence: Play a key role in shaping our engineering culture, values, and processes, contributing to the growth of our team and the evolution of our product.</li>
<li>Diverse Projects: Engage in a dynamic mix of designing and deploying cutting-edge data solutions, collaborating with leading AI researchers, and directly influencing the product roadmap.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>At least 2 years of relevant experience is preferred</li>
<li>Proven track record of shipping high-quality products and features at scale</li>
<li>Strong problem-solving skills and the ability to work independently or as part of a collaborative team</li>
<li>Desire to thrive in a fast-paced, dynamic environment</li>
<li>Ability to turn business and product ideas into engineering solutions</li>
<li>Strong coding abilities and the ability to effectively communicate complex technical concepts to both technical and non-technical audiences</li>
<li>Ability to adapt quickly to the ever-changing world of generative AI</li>
<li>Excitement to join a dynamic, hybrid team in either San Francisco or New York City</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive health, dental and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
<p>Salary Range:</p>
<p>$179,400-$224,250 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$179,400-$224,250 USD</Salaryrange>
      <Skills>large-scale data processing, distributed systems, machine learning, AI concepts, cloud-based infrastructure, experience working directly with enterprise customers, experience with reinforcement learning with human feedback</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI is a leading AI data foundry that provides high-quality data and full-stack technologies to power the world&apos;s leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4593571005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5cf5141e-a21</externalid>
      <Title>Distinguished Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Distinguished Engineer to shape the vision and technical roadmap of our core AI/ML infrastructure. Reporting directly to the SVP of Engineering, Enterprise AI, this individual will drive long-term technical direction for our Scale Generative AI Platform (SGP), influence architectural decisions across the company, and partner closely with engineering and product leaders to bring advanced AI capabilities to enterprise customers.</p>
<p>You&#39;ll serve as a cross-organizational thought leader - setting standards for technical excellence, mentoring senior engineers, and ensuring our systems and models meet the demands of global-scale deployment. This is a rare opportunity to influence both foundational AI infrastructure and the enterprise AI applications built on top of it.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and drive the technical strategy for Scale&#39;s AI/ML infrastructure and SGP platform, balancing short and long-term investments.</li>
<li>Partner with senior engineering and product leadership to ensure scalable, secure, and performant enterprise AI systems.</li>
<li>Lead architecture and design reviews across multiple teams, ensuring technical consistency and innovation.</li>
<li>Serve as a trusted advisor and mentor to principal engineers and technical leads across the organization.</li>
<li>Evaluate and integrate emerging technologies in AI, distributed systems, and data infrastructure to keep Scale at the frontier of innovation.</li>
<li>Represent Scale externally in the AI community - through speaking engagements, partnerships, and thought leadership.</li>
<li>Drive technical execution and accountability for critical cross-functional initiatives that advance Scale&#39;s enterprise AI capabilities.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>15+ years of experience as a technical software engineering leader.</li>
<li>Proven record of technical leadership at AI-native companies, hyperscalers, or equivalent high-scale environments.</li>
<li>Deep technical expertise in AI/ML infrastructure, knowledge of ML models/algorithm design/implementation and their application to real-world problems; experience with GenAI preferred.</li>
<li>Demonstrated success in setting technical vision and leading cross-organizational initiatives with measurable business impact.</li>
<li>Experience influencing and mentoring engineering teams in complex, matrixed environments.</li>
<li>Ability to communicate and collaborate effectively to create a shared sense of vision and purpose across teams and functions.</li>
<li>Advanced degree in Computer Science, Engineering, or related field preferred but not required.</li>
</ul>
<p>Culture &amp; Impact:</p>
<p>At Scale, we believe that AI should amplify human potential - and our engineering culture reflects that belief. Our teams operate at the intersection of innovation, rigor, and impact, solving some of the hardest problems in AI infrastructure and deployment.</p>
<p>The Distinguished Engineer will play a key role in shaping how AI systems are built, deployed, and governed within enterprise environments. This role represents the highest bar of technical excellence at Scale - a trusted voice in setting direction, enabling innovation, and ensuring that our technology scales responsibly and effectively to meet the evolving needs of our customers.</p>
<p>You’ll have the opportunity to influence company-wide strategy, contribute to industry-leading work in generative AI infrastructure, and mentor the next generation of engineering talent pushing the boundaries of what’s possible with AI.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$285,200-$356,500 USD</Salaryrange>
      <Skills>AI/ML infrastructure, Generative AI, Distributed systems, Data infrastructure, Technical leadership, Cross-functional collaboration, Communication, Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions. It provides high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4632142005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61e346b2-915</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is £225,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be1fd1e-8f3</externalid>
      <Title>Principal Architect</Title>
<Description><![CDATA[<p>We are seeking a Principal Architect to drive the design, development, and deployment of our agentic AI products in a fast-paced, collaborative environment. In this role, you will lead a team of 50+ engineers, providing both strategic and technical guidance. You’ll be responsible for high-impact architectural decisions, cross-company collaboration, and executive-level engagements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor a high-performing engineering team of 50+, fostering a culture of technical excellence and ownership.</li>
<li>Guide your team through complex challenges involving LLMs, AI agents, and large-scale distributed systems.</li>
<li>Represent Scale AI in high-stakes negotiations and strategic discussions with senior external partners, demonstrating strong technical competence and credibility.</li>
<li>Develop and communicate a compelling vision for Scale AI’s technology applied to your program.</li>
<li>Provide regular updates to senior leadership and key stakeholders on progress, risks, and opportunities.</li>
<li>Foster a culture of speed, unity of purpose, resilience, and teamwork.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>10+ years of software engineering experience, including 5+ years in a technical leadership or staff role.</li>
<li>Deep understanding of modern AI/ML technologies, including experience working with LLMs and AI agents.</li>
<li>Proficiency in one or more modern programming languages (Python, JavaScript/TypeScript).</li>
<li>Hands-on experience with Kubernetes and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Strong product and business sense, with a track record of aligning engineering efforts with company goals.</li>
<li>Ability to operate effectively in ambiguous, fast-changing environments and guide your team to do the same.</li>
<li>Experience in executive-level engagement with industry partners and Public Sector customers.</li>
</ul>
<p>Success Metrics, within 6 months:</p>
<ul>
<li>Successful demonstration of agentic AI’s mission value in high-stakes customer demonstrations</li>
<li>Establish Scale AI as the preferred agentic AI partner for the PEO</li>
<li>Establish a high-velocity, agile engineering cadence both internally and with our industry partners</li>
</ul>
<p>Within 12–18 months:</p>
<ul>
<li>Secure a follow-on contract award with expanded scope for Scale</li>
<li>Position Scale AI as the global AI leader in this mission area</li>
<li>Establish developed solutions as Scale product offerings to deliver on future contracts</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$257,000-$321,000 USD</Salaryrange>
      <Skills>software engineering, technical leadership, AI/ML technologies, LLMs, AI agents, Kubernetes, cloud infrastructure, Python, JavaScript/TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599202005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd7327f8-fcf</externalid>
      <Title>Staff Software Engineer, Full-Stack - Enterprise Gen AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a frontend-focused full-stack engineer to help build AI-powered applications that redefine enterprise workflows and push the boundaries of interactive AI. As a staff software engineer, you&#39;ll work on a mix of cutting-edge customer-facing AI applications and internal SaaS products. Our engineering team powers projects like TIME&#39;s Person of the Year AI experience, where our AI technology helped shape one of the most iconic features in media. You&#39;ll also contribute to Scale&#39;s GenAI Platform (SGP), a powerful system that enables businesses to build and deploy AI agents at scale.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and enhancing user-facing AI applications for major enterprise customers, including high-profile media and Fortune 500 companies</li>
<li>Developing and refining features for Scale&#39;s GenAI Platform, empowering businesses to build, deploy, and manage AI-driven agents</li>
<li>Designing, building, and optimizing polished, high-performance UIs using Next.js, React, TypeScript, and Tailwind</li>
<li>Working closely with product managers, designers, and AI/ML teams to create seamless, intuitive, and impactful user experiences</li>
<li>Integrating frontend applications with backend services, working with APIs, authentication systems, and cloud-based infrastructure</li>
</ul>
<p>In this role, you&#39;ll have the opportunity to shape the future of AI-powered user experiences, working on projects that impact millions of users while developing tools that empower businesses to deploy AI at scale.</p>
<p>The base salary range for this full-time position in our hub locations of San Francisco, New York, or Seattle is $248,400-$310,500 USD. Compensation packages at Scale include base salary, equity, and benefits. You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,400-$310,500 USD</Salaryrange>
      <Skills>Next.js, React, TypeScript, Tailwind, AI/ML, APIs, Authentication systems, Cloud-based infrastructure, FastAPI, PostgreSQL, GraphQL, AWS, Azure, GCP, Data-rich web platforms, Interactive AI applications, Agent-based systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4529529005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>901202b0-bfa</externalid>
      <Title>Product Security Engineer - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>
<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. Your expertise in TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, and Terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>
<p>You will:</p>
<ul>
<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>
<li>Evaluate and enhance the security of our product offerings through RFC and service reviews.</li>
<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>
<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>
<li>Utilize Terraform orchestration to ensure secure and efficient infrastructure management.</li>
<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>
<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>
<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in Node.js, TypeScript, Python, and/or Kubernetes.</li>
<li>Strong understanding of modern JavaScript application design.</li>
<li>Production experience with Kubernetes-backed services.</li>
<li>Hands-on experience with SAST and DAST tools and methodologies.</li>
<li>Familiarity with Terraform orchestration for infrastructure management.</li>
<li>Ability to structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>
<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within a team.</li>
<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>The base salary range for this full-time position in the location of Washington DC/Hawaii is: $205,700-$257,400 USD</p>
<p>The base salary range for this full-time position in the location of St. Louis/Suffolk is: $171,600-$214,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205,700-$257,400 USD (Washington DC/Hawaii), $171,600-$214,500 USD (St. Louis/Suffolk)</Salaryrange>
      <Skills>TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, terraform orchestration, NodeJS, modern Javascript application design, Kubernetes backed services, SAST and DAST tools and methodologies, terraform orchestration for infrastructure management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4651559005</Applyto>
      <Location>St. Louis, MO; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>460d00aa-b48</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>About the role</p>
<p>We&#39;re building the infrastructure that lets people talk to Claude: real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
<li>Are comfortable with ambiguity: voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
<li>Working alongside ML researchers to productionize models (speech experience is a plus but not required)</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, Streaming pipelines, APIs, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, Voice activity detection, Echo cancellation, Jitter buffering, Audio DSP, GPU-based serving infrastructure, Telephony, Live streaming, Video conferencing, Voice assistant platforms, Mobile audio pipelines on iOS, Android, Working alongside ML researchers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA; New York City, NY; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1ab22bcd-bdc</externalid>
      <Title>Product Marketing Lead, GenAI</Title>
      <Description><![CDATA[<p>At Scale, we&#39;re looking for a Product Marketing Lead to join our team. As a Product Marketing Lead, you will own positioning and messaging for Scale&#39;s GenAI data products, articulating our differentiation on data quality, delivery speed, and multimodal breadth in a way that resonates with AI researchers and technical buyers.</p>
<p>You will lead the content and social strategy for Scale Labs&#39; dedicated online presence, taking a research-native approach that earns attention and credibility from the AI community. You will build and maintain competitive intelligence and a sharp point of view on the data market, arming the team with differentiated positioning as the market evolves.</p>
<p>You will partner with Scale Labs researchers to amplify published work - leaderboards, benchmarks, and research papers - and translate it into pipeline and demand generation for the GenAI business. You will develop go-to-market strategies for new data offerings and modalities, working closely with product and sales to drive awareness and build market momentum.</p>
<p>You will collaborate cross-functionally with sales, engineering, and research to ensure consistent, compelling messaging across every customer touchpoint.</p>
<p>Ideally, you&#39;d have 5+ years of experience in product marketing, with a track record of marketing technical products to developer or research audiences. You should have strong technical fluency and passion for AI, with the ability to hold a credible conversation about AI/ML fundamentals, training, and the role of data quality in model performance.</p>
<p>You should also have experience building social-native content and community strategies on platforms like X, with an instinct for what resonates with technical practitioners. You should have excellent written communication and storytelling skills, with the ability to make complex technical concepts compelling without oversimplifying them.</p>
<p>Nice to haves include familiarity with the AI training data market, or adjacent data and infrastructure spaces, as well as an existing network or relationships within the AI research community.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$155,200-$194,000 USD</Salaryrange>
      <Skills>product marketing, technical product marketing, AI/ML fundamentals, data quality, delivery speed, multimodal breadth, competitive intelligence, social media marketing, content strategy, community management, familiarity with AI training data market, adjacent data and infrastructure spaces, existing network or relationships within AI research community</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4675758005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but primarily as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses (for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications)</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal reasoning, Measurement, Data science, Machine learning, Quantitative engineering, Experience in a forward deployed, solutions, consulting, or other client-facing technical role, Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products, Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow, Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery, Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems, Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling, Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>acd28d99-495</externalid>
      <Title>Manager, Sales Development - EMEA</Title>
      <Description><![CDATA[<p>As a Sales Development Manager for EMEA at Anthropic, you will lead and scale our business development function across Europe, the Middle East, and Africa. You will build and manage a team of 6-8 BDRs primarily in Dublin. This role requires exceptional agility, cultural fluency across diverse European markets, and the ability to develop segment-specific strategies while navigating complex regulatory environments and regional nuances. You will be instrumental in establishing Anthropic&#39;s regional presence and building the foundation for long-term growth in EMEA.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build, lead, and scale a team of 6-8 BDRs across EMEA markets including SEU, NEU, and DACH</li>
<li>Develop and execute region-specific prospecting strategies that account for local market dynamics, cultural nuances, and competitive landscapes across diverse European markets</li>
<li>Support all sales segments (Startups, Commercial, Enterprise) with agility to shift resources based on regional opportunities</li>
<li>Partner with regional AEs and sales leadership to align pipeline generation with territory plans and revenue targets</li>
<li>Establish KPIs and tracking mechanisms that account for regional differences while maintaining global consistency</li>
<li>Create localized training programs and enablement materials that resonate with diverse European business cultures</li>
<li>Build and maintain relationships with regional marketing teams to optimize lead quality and campaign effectiveness</li>
<li>Own regional Pipeline Reviews with sales leadership covering market-specific insights and growth opportunities</li>
<li>Navigate complex hiring and employment regulations across multiple European countries, partnering with HR and Legal</li>
<li>Coach and develop BDRs on region-specific prospecting techniques and career progression</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>3-6 years of experience managing sales development or inside sales teams in EMEA</li>
<li>Proven track record of growing and scaling teams across multiple European countries/offices</li>
<li>Experience managing distributed teams across different time zones and cultures within EMEA</li>
<li>Strong understanding of business practices, sales cycles, and decision-making processes in key EMEA markets</li>
<li>Experience adapting global sales processes for European markets while maintaining consistency</li>
<li>Strong analytical skills with ability to identify and act on regional market opportunities</li>
<li>Experience with Salesforce and sales technology stack</li>
<li>Excellent communication skills with ability to operate effectively across European cultures</li>
<li>Bachelor&#39;s degree or equivalent work experience</li>
</ul>
<p>Preferred Experience:</p>
<ul>
<li>Experience at US-headquartered technology companies expanding in EMEA</li>
<li>Background in AI/ML, cloud infrastructure, or developer platforms</li>
<li>Track record of building BDR/SDR functions from scratch in new European markets</li>
<li>Experience managing both velocity (Startup/Commercial) and strategic (Enterprise) sales motions</li>
<li>Fluency in German, French, Spanish or other major European languages</li>
<li>Network of talent for BDR hiring across EMEA markets</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€170.000-€225.000 EUR</Salaryrange>
      <Skills>Sales Development, Team Management, Strategic Planning, Market Analysis, Communication, Sales Technology Stack, Analytical Skills, AI/ML, Cloud Infrastructure, Developer Platforms, Fluency in European Languages</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5121912008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4c6f4642-054</externalid>
      <Title>Staff Product Manager, Agentic Platform</Title>
      <Description><![CDATA[<p>We are seeking a product leader to play a pivotal role in building agentic AI platforms to support national-level decisions, including some of the nation’s most important national security challenges. As a Staff Product Manager, you will develop enterprise-grade solutions that leverage cutting-edge AI and AI agents to drive value for public sector customers, and you will work with executives at Scale and our customers to determine and execute the product strategy of the business.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning end-to-end product development: understanding customer pain points, defining product requirements, and managing development, testing, and launches</li>
<li>Leading cross-functional teams including engineering, product design, operations, marketing, go-to-market, and finance</li>
<li>Developing a point of view, and executing on it, for turning the solutions we build into scalable software that we can commercialize across the industry</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Technical experience building ML-powered and/or enterprise-facing products</li>
<li>A strong understanding of generative AI technologies and their applications in public or large-scale private sector settings</li>
<li>Experience operating in a fast-paced environment with high ambiguity</li>
<li>Exceptional leadership, presentation, and communication skills with the ability to influence cross-functional teams</li>
<li>Data literacy and experience with data analytics</li>
</ul>
<p>You will maintain a Top Secret security clearance.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>For pay transparency purposes, the base salary range for this full-time position is $237,600-$297,000 USD in San Francisco, New York, and Seattle, and $213,400-$267,300 USD in Washington DC, Texas, Colorado, and Hawaii.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD; $213,400-$267,300 USD</Salaryrange>
      <Skills>generative AI technologies, public sector AI solution, software engineering principles, ML/AI application development, Top Secret security clearance, experience building infrastructure and tooling to develop and support agentic applications, experience working in startup environments building solutions for public sector/federal customers, understanding of public/federal networks, infrastructure, and deployment constraints, TS/SCI Security Clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4612403005</Applyto>
      <Location>New York, NY; San Francisco, CA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dab43521-cfa</externalid>
      <Title>Software Engineer, Robotics &amp; Autonomous Systems</Title>
      <Description><![CDATA[<p>In this role, you&#39;ll be a key contributor building production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and systems for robotics data collection, teleoperation, and model evaluation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Building real-time systems for robotic control, sensor fusion, and perception pipelines</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
<li>Collaborating with ML engineers and researchers to bring robotics research into production</li>
<li>Delivering features at high velocity while maintaining system reliability and performance</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li>3+ years of software engineering experience in robotics, autonomous vehicles, or related fields</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Practical experience with robotics frameworks (ROS/ROS2), simulation environments, or AV systems</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>Python, TypeScript, Node.js, React, C++, ROS/ROS2, simulation environments, AV systems, distributed systems, workflow orchestration, cloud infrastructure, databases, data processing, robotics hardware platforms, computer vision, SLAM, motion planning, imitation learning, autonomous vehicle data, lidar technologies, 3D data processing, ML model deployment, serving frameworks, teleoperation systems, VR interfaces, workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4618065005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Wiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a DevOps, platform, or infrastructure role</li>
</ul>
<p><strong>Customer Product Whisperer</strong></p>
<ul>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>15092e66-444</externalid>
      <Title>Strategic Account Executive, GSI</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Strategic Account Executive on the GSI team, you&#39;ll own a named book of accounts and the full revenue outcome for each. You&#39;ll develop a point of view on where Claude creates the most value across a firm&#39;s practice areas, advisory services, delivery teams, and internal operations, build relationships with the partners and executives who sponsor transformation at that scale, and expand the partnership well beyond the original buyer.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own all revenue outcomes for a named book of GSI accounts, driving both new logo acquisition and multi-practice expansion through complex, multi-quarter sales cycles involving partner-led approval, global procurement, and custom commercial terms</li>
<li>Develop a clear thesis for each priority firm, identifying where Claude creates value across knowledge management, advisory workflows, deliverable generation, and client engagements, and execute a sequenced engagement plan across practices, regions, and stakeholders</li>
<li>Build and independently advance executive relationships with Managing Partners, Practice Leads, MDs, CIOs, CTOs, and Heads of AI/Digital, anchoring every conversation to their strategic priorities: utilization, leverage, realization, and billable productivity</li>
<li>Proactively create demand in unengaged practice areas and regions, using early wins as proof points to open new doors across decentralized, partner-led organizations</li>
<li>Build quantified, firm-specific business cases mapped to the GSI operating model, using their own language and metrics, that shape deals rather than justify them after the fact</li>
<li>Identify and close lighthouse partnerships that become references across the GSI landscape and set up the future sell-with motion</li>
<li>Partner cross-functionally with Product, Applied AI, Engineering, and Partnerships to inform the roadmap based on GSI buyer needs, and contribute to the playbook, proof points, and commercial structures that become the repeatable GSI motion</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of enterprise software sales experience with a track record of owning named accounts at large, complex, partner-led organizations (global SIs, strategy consultancies), managing multi-quarter sales cycles through technical evaluations, partner-led approval, and global procurement</li>
<li>Demonstrated ability to independently build and advance relationships at the Partner, MD, and C-suite level, including practice leadership and innovation/digital executives, and hold credible conversations with both technical and business audiences</li>
<li>Experience building firm-specific business cases grounded in the firm&#39;s own operating metrics (utilization, leverage, realization, margin) and defending commercial terms through complex negotiations</li>
<li>Background selling platform, API, cloud infrastructure, or emerging technology into enterprises evaluating a new category</li>
<li>Genuine interest in AI and strong alignment with Anthropic&#39;s mission of responsible AI development</li>
<li>A history of growing accounts meaningfully beyond the original engagement by proactively creating demand across new practice areas, regions, and use cases</li>
</ul>
<p><strong>What Will Make You Stand Out</strong></p>
<ul>
<li>Direct experience selling into Global SIs or strategy consultancies, and fluency in how partner-led firms operate and measure success</li>
<li>Experience as an early AE in a vertical or segment, where you helped build the sales motion rather than inherit it</li>
<li>Background selling developer platforms, cloud infrastructure, or AI/ML tooling into traditional partner-led services firms</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise software sales, Named account ownership, Complex sales cycles, Partner-led approval, Global procurement, Firm-specific business cases, Commercial terms negotiation, Platform, API, cloud infrastructure, or emerging technology sales, AI interest and alignment with Anthropic&apos;s mission, Direct experience selling into Global SIs or strategy consultancies, Experience as an early AE in a vertical or segment, Background selling developer platforms, cloud infrastructure, or AI/ML tooling</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that develops artificial intelligence systems. It has a team of researchers, engineers, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5176036008</Applyto>
      <Location>New York City, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43952002-812</externalid>
      <Title>Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices. You&#39;ll also get widespread exposure to the forefront of the AI race as Scale sees it in enterprises, startups, governments, and large tech companies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>
<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and knowledge-sharing sessions.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>This role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676936005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6fc00c5-564</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Software Engineer to join our Robotics business unit, focused on solving the data bottleneck in Physical AI across Robotics, Autonomous Vehicles, and Computer Vision. As a key contributor, you&#39;ll own and architect large-scale data processing pipelines, build ML training and fine-tuning pipelines, and develop tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation.</p>
<p>In this role, you&#39;ll interact directly with robotics and AV stakeholders to understand their technical needs and drive product development. You&#39;ll also design comprehensive monitoring and evaluation frameworks for robotics models and data quality, and collaborate with ML engineers and researchers to bring robotics research into production.</p>
<p>To succeed, you&#39;ll need at least 6 years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems. You should have strong programming skills in Python and TypeScript/Node.js for production systems, experience with React and modern frontend development for 3D interfaces, and concurrent and real-time systems expertise.</p>
<p>We&#39;re looking for someone who can deliver features at high velocity while maintaining system reliability and performance, and has a track record of working with cross-functional teams including ML engineers, researchers, and customers. Strong communication skills and the ability to operate with high autonomy are essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/Node.js, React, Concurrent and real-time systems, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing at large scale, C++, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment and serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4612282005</Applyto>
      <Location>Argentina; Uruguay</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e531ea80-a7c</externalid>
      <Title>Security Risk &amp; Compliance, HIPAA</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As part of the Anthropic security department, the compliance team owns understanding security and AI safety expectations, as established by regulators, customers, and industry norms. The compliance team uses this understanding to provide direction to internal partners on the priorities of security and safety requirements they must meet.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Operate Anthropic&#39;s HIPAA compliance review program, executing on HIPAA obligations across the product portfolio.</li>
<li>Run a dedicated HIPAA review track in parallel with the Product Security Review (PSR) process, applying a compliance checklist to every in-scope change and recording a complete, auditable disposition before release.</li>
<li>Build and maintain change monitoring mechanisms to catch HIPAA-relevant changes, including default-setting changes and incremental updates.</li>
<li>Partner with product and engineering teams upstream to ensure HIPAA considerations are built into first releases rather than addressed as post-launch remediations.</li>
<li>Assess and document PHI data flows, infrastructure boundaries, and control coverage across Anthropic&#39;s cloud-native product environments.</li>
<li>Write, update, and enact HIPAA policies, checklists, deployment guides, and audit evidence packages.</li>
<li>Manage Business Associate Agreement (BAA) obligations and coordinate with legal and external counsel on PHI determination questions and emerging regulatory requirements.</li>
<li>Contribute to Anthropic&#39;s broader compliance program, including adjacent frameworks (SOC 2, ISO 27001, NIST 800-53) where they intersect with HIPAA obligations.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of progressive experience in compliance roles, including direct ownership of a HIPAA compliance program at a technology company</li>
<li>Have evaluated PHI data flows and infrastructure boundaries in cloud-native environments (AWS, GCP, or Azure) and can assess HIPAA exposure without always needing to escalate to legal</li>
<li>Have designed and operated a compliance review mechanism integrated into a product development or release process</li>
<li>Can translate HIPAA technical compliance requirements into actionable workstreams for engineering and product teams</li>
<li>Can deliver clear, precise compliance documentation (policies, checklists, audit evidence, deployment guides) for both technical and non-technical audiences</li>
<li>Thrive in fast-paced, ambiguous environments where you&#39;re expected to build processes from scratch and keep them working under rapid product change</li>
<li>Are energized by being the organizational expert who educates and influences rather than only advises</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Worked in AI/ML or developer-platform companies and understand the unique challenges of PHI exposure in model inference and API environments</li>
<li>HITRUST CSF experience or experience mapping HIPAA requirements to HITRUST controls</li>
<li>Implemented or significantly contributed to compliance automation or GRC tooling integrations</li>
<li>Relevant certifications (CHPC, HCISPP, CISA, CISM, CISSP, or equivalent)</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Annual compensation range: $255,000-$255,000 USD</li>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$255,000 USD</Salaryrange>
      <Skills>HIPAA compliance, compliance review program, change monitoring mechanisms, PHI data flows, infrastructure boundaries, control coverage, Business Associate Agreement, legal and external counsel, compliance program, AI/ML, developer-platform, HITRUST CSF, compliance automation, GRC tooling integrations, relevant certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5160757008</Applyto>
      <Location>San Francisco, CA; New York City, NY; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d4c3fc5-2ed</externalid>
      <Title>Senior Software Engineer, Inference</Title>
      <Description><![CDATA[<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Annual compensation range for this role is €235,000-€295,000 EUR.</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; instead, visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235,000-€295,000 EUR</Salaryrange>
      <Skills>High-performance, large-scale distributed systems, Implementing and deploying machine learning systems at scale, Load balancing, request routing, or traffic management systems, LLM inference optimization, batching, and caching strategies, Kubernetes and cloud infrastructure (AWS, GCP), Python or Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4641822008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad5c420d-b2d</externalid>
      <Title>Senior Solutions Architect - Lakebase</Title>
      <Description><![CDATA[<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model, which provides enhanced focus on earlier-stage, highly prioritised product lines in order to establish product-market fit and set the course for rapid revenue growth.</p>
<p>The team has a global go-to-market mandate, though each member covers a specific local region. Clients may span one or more business units and verticals.</p>
<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, and position the specific product line in depth, providing compelling reasons for clients to adopt and grow their usage of the product.</p>
<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to the product line, with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, and develop and deliver presentations, demos, and other assets so that clients can make an informed decision about adopting the product line in a meaningful way.</p>
<p>The Lakebase product line requires the following core technical competencies:</p>
<ul>
<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>
<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>
<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>
<li>Infrastructure, platform, and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, and migrations (vendor-to-vendor, on-premises to cloud).</li>
</ul>
<p><strong>Impact</strong></p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</p>
<p>Influence the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</p>
<p><strong>Competencies &amp; Responsibilities</strong></p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>
<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407181002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc62d58e-581</externalid>
      <Title>International Readiness Lead</Title>
      <Description><![CDATA[<p>As International Readiness Lead, you&#39;ll drive the cross-functional work that makes Claude deployable, compliant, and commercially viable in Anthropic&#39;s priority markets. You&#39;ll contribute to Anthropic&#39;s international compute strategy, develop a framework for evaluating and sequencing data residency and sovereign deployment requests, and identify and document international customer requirements for product localization.</p>
<p>You&#39;ll translate infrastructure and product capabilities into commercial propositions, partnering with Sales and Marketing to ensure international enterprise and government customers understand what Anthropic can deliver, and when. You&#39;ll serve as the internal subject matter expert on international readiness requirements, advising on deals, partnerships, and policy positions as they arise.</p>
<p>You&#39;ll build scalable processes for capturing, triaging, and acting on international product feedback so it doesn’t get lost in HQ product cycles. You&#39;ll serve as the GTM strategist for Anthropic’s mission-oriented international programs, including our approach to responsible AI deployment in democratic allied nations and our strategy for expanding access and affordability in Global South markets.</p>
<p>You&#39;ll partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation, not just a policy framework. You&#39;ll track and synthesise the competitive landscape for sovereign AI and national AI programs, surfacing implications for Anthropic’s positioning and commercial strategy.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to Anthropic’s international compute strategy</li>
<li>Develop a framework for evaluating and sequencing data residency and sovereign deployment requests</li>
<li>Identify and document international customer requirements for product localization</li>
<li>Translate infrastructure and product capabilities into commercial propositions</li>
<li>Serve as the internal subject matter expert on international readiness requirements</li>
<li>Build scalable processes for capturing, triaging, and acting on international product feedback</li>
<li>Serve as the GTM strategist for Anthropic’s mission-oriented international programs</li>
<li>Partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–7 years in product, technical GTM, solutions engineering, or strategy roles with meaningful international scope</li>
<li>Strong working knowledge of cloud infrastructure, data residency frameworks, and enterprise compliance requirements</li>
<li>Experience working with or selling to government customers or regulated enterprises</li>
<li>Ability to synthesise complex technical, regulatory, and geopolitical constraints into clear commercial and strategic recommendations</li>
<li>Comfortable building internal processes from scratch</li>
<li>High autonomy and strong written communication</li>
<li>Direct experience with sovereign cloud programs, regulated data environments, or government AI initiatives is a plus</li>
<li>Familiarity with EU AI Act, India DPDP Act, or similar regulatory frameworks shaping enterprise AI deployment internationally is a plus</li>
<li>Experience at a hyperscaler, cloud provider, or enterprise SaaS company navigating international infrastructure decisions is a plus</li>
<li>An interest in the intersection of AI, democratic governance, and responsible technology deployment is a plus</li>
</ul>
<p>Annual salary: £120,000-£170,000 GBP / $190,000-$270,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£120,000-£170,000 GBP / $190,000-$270,000 USD</Salaryrange>
      <Skills>Cloud infrastructure, Data residency frameworks, Enterprise compliance requirements, Government customers, Regulated enterprises, Complex technical, regulatory, and geopolitical constraints, Commercial and strategic recommendations, Internal processes, High autonomy, Strong written communication, Sovereign cloud programs, Regulated data environments, Government AI initiatives, EU AI Act, India DPDP Act, Hyperscalers, Cloud providers, Enterprise SaaS companies, International infrastructure decisions, AI, democratic governance, and responsible technology deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151939008</Applyto>
      <Location>London, UK; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2717510f-5f6</externalid>
      <Title>Transaction Principal</Title>
      <Description><![CDATA[<p>As a Transaction Principal for Europe at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our European data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems across Europe. You&#39;ll bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams, and partner closely with our Compute Markets team, who own the Europe market strategy and government relationships.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the RFP and commercial sourcing process for European data center deals, managing developer outreach, proposal evaluation, and competitive selection across multiple markets</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, the network organization, platform engineers, and finance to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing status</li>
<li>Develop and maintain transaction timelines, tracking critical-path items and proactively identifying risks that could impact deal closure</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
<li>Manage complex digital infrastructure development activities to a construction-ready state, through a developer or directly</li>
<li>Marry the right projects, capital stacks, and developers at the right stages</li>
<li>Navigate country-specific permitting, grid connection, and regulatory requirements that vary significantly across European markets</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint across the region</li>
<li>Partner with the Compute Markets Manager to prioritize markets, sites, and counterparties, and feed deal learnings back into Europe market strategy</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Have experience working in or across European markets, with knowledge of the regional data center and development landscape, including established FLAP-D hubs and emerging markets like the Nordics and Southern Europe</li>
<li>Are comfortable operating across multiple countries with different legal frameworks, languages, and business cultures</li>
<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Demonstrate exceptional communication skills and can coordinate effectively across time zones with US-based HQ teams and distributed European partners</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>
<li>Come from the development side of the industry rather than traditional brokerage/leasing: you understand how DC development works and how value is created (yield-on-cost, cap rates, development fees)</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements across multiple European jurisdictions</li>
<li>Understand utility coordination, power procurement, or energy considerations in data center transactions, particularly in the European context (fragmented national power markets, grid connection queues, renewable PPAs, sustainability and efficiency regulations)</li>
<li>Have familiarity with data sovereignty and regulatory considerations that influence European site selection</li>
<li>Have relationships within the European data center developer, operator, and broker ecosystem</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
</ul>
<p>Annual compensation range for this role is £225,000-£270,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£270,000 GBP</Salaryrange>
      <Skills>transaction management, commercial real estate, data center leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, term sheets, LOIs, commercial agreements, cross-functional stakeholder engagement, due diligence teams, legal counsel, network organization, platform engineers, finance, auxiliary organizations, networks, deployments, government relations, transaction timelines, critical-path items, risks, technical and operational needs, contractual terms, digital infrastructure development, construction-ready state, projects, capital stacks, developers, country-specific permitting, grid connection, regulatory requirements, transaction processes, playbooks, scalable deal execution, Europe market strategy, Compute Markets Manager, market prioritization, site prioritization, counterparty prioritization, deal learnings</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5170084008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd3b618b-96d</externalid>
      <Title>Security Labs Engineer</Title>
      <Description><![CDATA[<p>Job Title: Security Labs Engineer</p>
<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p><strong>About the Role</strong></p>
<p>Security at Anthropic is not a compliance exercise. It is a core part of how we stay safe as we build increasingly capable systems. Our Responsible Scaling Policy commits us to launching structured security R&amp;D projects: ambitious, time-boxed experiments designed to resolve high-uncertainty questions about our long-term security posture.</p>
<p>Each project runs for roughly six months with defined exit criteria. Some will succeed and move toward production. Others will fail, and we&#39;ll treat that as a useful signal. The questions these projects are designed to answer include:</p>
<ul>
<li>Can our core research workflows survive extreme isolation?</li>
<li>Can we get cryptographic guarantees where we currently rely on trust?</li>
<li>Can AI become our most effective security control?</li>
</ul>
<p>As a Security Labs Engineer, you own one or more projects end-to-end: scoping the experiment, building the infrastructure, coordinating across teams, running the pilot, documenting results, and where the experiment succeeds, helping scale it into production. This is 0-to-1 and 1-to-10 work.</p>
<p><strong>Current Project Areas</strong></p>
<p>The portfolio evolves based on what we learn. Current areas include:</p>
<ul>
<li>Designing and operating a mock high-assurance research environment: simulating what our infrastructure would look like under extreme isolation and physical security controls, with real measurement of productivity impact</li>
<li>Exploring cryptographic verification of model integrity using techniques like zero-knowledge proofs to provide mathematical guarantees about what is running in production</li>
<li>Assessing the feasibility of confidential computing across the full model lifecycle (note: this is an open question, not a committed roadmap item)</li>
<li>Piloting AI-assisted security tooling including vulnerability discovery, automated patching, anomaly detection, and adaptive behavioral monitoring</li>
<li>Prototyping API-only access regimes where even internal research workflows never touch raw model weights</li>
</ul>
<p>Part of your job is helping shape what comes next based on gaps uncovered in the current round.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the end-to-end execution of a Security Labs project: refine the hypothesis, design the experiment, build the prototype, run the pilot, and write up the results</li>
<li>Build novel security infrastructure under real time pressure: isolated clusters, hardened access controls, cryptographic verification layers, with a bias toward learning fast</li>
<li>Where experiments succeed, drive them toward production scale; an experiment that works on one cluster but not a hundred is not a finished result</li>
<li>Work embedded with research teams (Pretraining, RL, Inference) to stress-test whether their core workflows can function under extreme security controls, and document precisely where they break</li>
<li>Evaluate and integrate emerging security technologies through coordination with external vendors and research groups</li>
<li>Turn experimental results into clear, decision-ready writeups that inform Anthropic&#39;s long-term security architecture and RSP commitments</li>
<li>Maintain a pain-point registry and feasibility assessment for each project, feeding directly into the design of production high-assurance environments</li>
<li>Help scope and prioritize the next wave of Labs projects based on what the current round uncovers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software or security engineering experience, with a solid foundation in production systems</li>
<li>Some of that time spent on pilots, prototypes, or applied research work where shipping a working answer to a hard question was the explicit goal</li>
<li>Strong programming skills in Python and at least one systems language (Go, Rust, or C/C++)</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), Kubernetes, and networking fundamentals sufficient to stand up and tear down isolated environments quickly</li>
<li>A track record of cross-functional execution: you can walk into a room with ML researchers, infrastructure engineers, and vendors and leave with a shared plan</li>
<li>Clear written communication: you know how to turn six weeks of experimentation into a two-page memo someone can act on</li>
<li>Comfort with ambiguity and iteration, having run experiments that failed, extracted the lesson, and moved forward</li>
<li>Genuine curiosity about what it would actually take to defend against a nation-state-level adversary</li>
<li>Passion for AI safety and a real understanding of the role security plays in making frontier AI development go well</li>
<li>Bachelor&#39;s degree in Computer Science, a related field, or equivalent industry experience</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Prior experience in offensive security, red teaming, or security research, having thought adversarially about systems and knowing which threats actually matter</li>
<li>Familiarity with airgapped or high-side environments (classified networks, ICS/SCADA, financial trading infrastructure, or similar) and the operational realities of working inside them</li>
<li>Knowledge of applied cryptography: zero-knowledge proofs, attestation protocols, secure enclaves, TPMs, or confidential computing primitives</li>
<li>Experience with ML infrastructure (training pipelines, inference serving, model packaging) sufficient for grounded conversations with researchers about what their workflows actually need</li>
<li>Background building or operating security systems in environments that demand rapid iteration rather than rigid change control</li>
<li>Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</li>
</ul>
<p><strong>Location</strong></p>
<p>This role is based in our San Francisco office (500 Howard St). Several Labs projects involve physical secure facilities on-site, so expect to be in-office more frequently than Anthropic&#39;s standard 25% hybrid baseline.</p>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and equity package</li>
<li>Comprehensive health insurance and retirement plans</li>
<li>Flexible work arrangements, including remote work options</li>
<li>Professional development opportunities, including training and conference attendance</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
<li>Opportunity to work on challenging and impactful projects</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>If you&#39;re excited about the opportunity to join our team and contribute to the development of secure and beneficial AI systems, please submit your application. We can&#39;t wait to hear from you!</p>
<p><strong>Deadline to Apply</strong></p>
<p>None; applications are reviewed on a rolling basis.</p>
<p><strong>Annual Compensation Range</strong></p>
<p>$405,000 - $485,000 USD</p>
<p><strong>Logistics</strong></p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with the process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C/C++, Cloud infrastructure, Kubernetes, Networking fundamentals, Cross-functional execution, Clear written communication, Comfort with ambiguity and iteration, Genuine curiosity about what it would actually take to defend against a nation-state-level adversary, Passion for AI safety, Real understanding of the role security plays in making frontier AI development go well, Offensive security, Red teaming, Security research, Applied cryptography, ML infrastructure, Background building or operating security systems in environments that demand rapid iteration rather than rigid change control, Prior work at a startup, on an innovation team, or in an applied research group where shipping a working v0 to answer a real question was explicitly the goal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that specializes in developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5153564008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef6605f2-fe0</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Robotics business unit. As a key contributor, you&#39;ll build production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
</ul>
<p>Ideal candidates will have:</p>
<ul>
<li>3+ years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Experience with concurrent and real-time systems, with special attention to timing constraints</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at large scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, Node.js, C++, React, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment, Serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4655050005</Applyto>
      <Location>Mexico City, MX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ff592ac-9d8</externalid>
      <Title>Sr. Software Engineer, Inference</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Inference team, responsible for building and maintaining critical systems that serve Claude to millions of users worldwide. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models.</p>
<p>As a Senior Software Engineer, you will be responsible for designing, implementing, and deploying large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing. You will work closely with our research team to develop new inference features and integrate new AI accelerator platforms.</p>
<p>To succeed in this role, you should have significant software engineering experience, particularly with distributed systems, and be results-oriented with a bias towards flexibility and impact. You should also be able to pick up slack, even if it goes outside your job description, and thrive in environments where technical excellence directly drives both business results and research breakthroughs.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale distributed systems, including intelligent request routing, fleet-wide orchestration, and load balancing</li>
<li>Work closely with our research team to develop new inference features and integrate new AI accelerator platforms</li>
<li>Collaborate with cross-functional teams to ensure seamless deployment and operation of our systems</li>
<li>Analyze observability data to tune performance based on real-world production workloads</li>
<li>Manage multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree or equivalent combination of education, training, and/or experience</li>
<li>Significant software engineering experience, particularly with distributed systems</li>
<li>Results-oriented with a bias towards flexibility and impact</li>
<li>Ability to pick up slack, even if it goes outside your job description</li>
<li>Thrives in environments where technical excellence directly drives both business results and research breakthroughs</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Familiarity with machine learning systems and infrastructure</li>
<li>Strong communication and collaboration skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£225,000-£325,000 GBP</Salaryrange>
      <Skills>Distributed systems, Kubernetes, Cloud infrastructure, Machine learning systems, Infrastructure engineering, Python, Rust, Java, C++, Go</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5152348008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f117ca6-268</externalid>
      <Title>Senior Technical Consultant - ElasticSearch</Title>
      <Description><![CDATA[<p>As a Sr. Technical Consultant – Search, you will play a pivotal role in helping our customers realise the value of Elastic&#39;s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic&#39;s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Search platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack</li>
<li>Leading end-to-end delivery of customer engagements – from discovery and design through implementation, enablement, and optimisation</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement</li>
</ul>
<p>The ideal candidate will have 5+ years of experience as a consultant, engineer, or architect with deep expertise in Enterprise Search technologies, including Elasticsearch and related search platforms. They will also have hands-on experience designing and deploying search solutions, proficiency in at least one programming language, and knowledge of distributed search systems and large-scale infrastructure.</p>
<p>The role offers a competitive salary range of $110,900-$175,500 USD, with opportunities for growth and professional development in a dynamic and distributed company.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$110,900-$175,500 USD</Salaryrange>
      <Skills>Elasticsearch, Enterprise Search, Search Architecture, Distributed Search Systems, Large-Scale Infrastructure, Programming Language, Cloud Platforms, Lucene, Databases, Linux, Java, Docker, Kubernetes, DevOps Practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a search and analytics platform for various industries.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7411526</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ba73370-831</externalid>
      <Title>Internal Audit IT Manager</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Internal Audit IT Manager, you will own end-to-end delivery of complex IT and security audits across our cloud infrastructure, security operations, and crypto-native systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end delivery of IT and security audits, from risk assessment and scoping through planning, fieldwork, testing, reporting, and issue validation, covering cloud infrastructure (AWS, GCP), security operations, identity and access management, data protection, IT asset management, vendor/third-party risk, and key in-scope products and services including blockchain infrastructure, centralized and self-hosted wallets, and cold storage.</li>
<li>Driving AI-enabled audit execution, designing and implementing data analytics, automation, and Generative AI solutions to modernize how we audit (e.g., continuous monitoring, anomaly detection, automated evidence retrieval, AI-assisted workpaper drafting), while maintaining rigorous human-in-the-loop validation to ensure accuracy and audit-quality conclusions.</li>
<li>Executing audits aligned with the multi-year IT and security audit roadmap, coordinating coverage with co-sourced partners and cross-functional risk initiatives while ensuring alignment with Coinbase&#39;s enterprise risk profile, technology strategy, and regulatory expectations across regions (US, EMEA, APAC).</li>
<li>Driving high-quality, risk-based findings and executive-level reporting, distilling key themes, emerging risks, and root causes into clear, concise materials for senior management and the Chief Audit Executive, ensuring findings are appropriately documented and supported by evidence.</li>
<li>Partnering with technology and security leadership across Engineering, Security, Infrastructure, Product, and Operations to build trusted relationships, challenge control design, and advise on pragmatic, risk-based, scalable remediation while maintaining third-line independence.</li>
<li>Driving disciplined issue management, ensuring timely, risk-based remediation by management, high-quality root cause analysis, and validation of remediation activities, escalating delays or thematic concerns to senior leadership as needed.</li>
<li>Evaluating and developing talent, assessing candidates and helping build a high-performing, technically credible audit team.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience in IT/security internal audit, technology risk, or first-line security/engineering roles with significant controls exposure.</li>
<li>Experience working in a fast-paced, cloud-native, or engineering-driven environment where technology and security practices evolve rapidly.</li>
<li>Hands-on audit experience with cloud platforms (AWS, GCP), including IAM policies, security configurations, logging/monitoring, and CI/CD pipelines.</li>
<li>AI-forward mindset with demonstrated experience applying Python, SQL, or AI tools to audit or security work, building workflows rather than just prompting.</li>
<li>Relevant professional certifications (e.g., CISA, CISSP, CIA, CISM) required; CPA or CFE a plus.</li>
<li>Working knowledge of key frameworks such as NIST CSF, COBIT, SOC 2, and ITIL.</li>
<li>High EQ and collaborative style.</li>
<li>Proven ability to translate complex technical findings into clear, executive-ready narratives for both technical and non-technical audiences.</li>
<li>Ability to manage multiple audits and initiatives across time zones (EMEA, APAC) with minimal oversight.</li>
<li>Demonstrated leadership and team-development experience, including mentoring, coaching, and managing direct reports.</li>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience auditing or building blockchain infrastructure, crypto custody, or wallet systems (hot/cold storage).</li>
<li>Background in a high-growth or rapidly scaling environment with complex, evolving technology stacks.</li>
<li>Experience with GRC platforms (Workiva, Archer, AuditBoard) or building custom audit automation tooling.</li>
<li>Familiarity with DORA, MiCA, or crypto-specific regulatory frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,345-$195,700 USD</Salaryrange>
      <Skills>IT security, Cloud infrastructure, Security operations, Identity and access management, Data protection, IT asset management, Vendor/third-party risk, Blockchain infrastructure, Centralized and self-hosted wallets, Cold storage, AI-enabled audit execution, Data analytics, Automation, Generative AI, Continuous monitoring, Anomaly detection, Automated evidence retrieval, AI-assisted workpaper drafting, Cloud platforms, IAM policies, Security configurations, Logging/monitoring, CI/CD pipelines, Python, SQL, AI tools, NIST CSF, COBIT, SOC 2, ITIL, CISA, CISSP, CIA, CISM, CPA, CFE</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7755116</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>374022f0-c2a</externalid>
      <Title>Senior Software Engineer, Infrastructure - Platform Compute</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Software Engineer, Infrastructure - Platform Compute to join our team.</p>
<p>As a member of our Platform Product Group, you will be responsible for building a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality.</p>
<p>Our teams build and maintain the platforms critical to the existence of Coinbase.</p>
<p>The Compute team builds and operates the Kubernetes platform at Coinbase, which is the primary compute orchestration infrastructure for services at Coinbase.</p>
<p>You will work towards continuously improving the scalability, reliability, efficiency, and operational experience of using Kubernetes at Coinbase, working closely with the Routing, Security, Reliability, and Observability teams (among many others).</p>
<p>Responsibilities:</p>
<ul>
<li>Build tooling and automation to make management of our Kubernetes clusters easy and reliable.</li>
<li>Build tooling and automation to improve the developer and operational experience of working with Kubernetes for all users.</li>
<li>Operationalize our Kubernetes platform so that it continues to be automated and self-healing to prevent unnecessary on-call burden.</li>
<li>Develop net-new Kubernetes-related capabilities for service owners at Coinbase (e.g. one-off jobs, cron, different deployment strategies, support for EFS, automated right-sizing).</li>
<li>Support our customers as they operate critical services for Coinbase in Kubernetes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of software engineering experience, including experience with Kubernetes or similar compute orchestration systems (e.g. Mesos, Nomad)</li>
<li>Strong AWS and/or GCP infrastructure knowledge</li>
<li>Ability to build backend services in addition to infrastructure</li>
<li>A high bar for quality, a self-starter mentality, and strong interpersonal skills</li>
<li>Strong problem-solving skills: the ability to identify problems, determine their root cause, and see them through to solution</li>
<li>Ability to balance business needs with technical solutions</li>
<li>Experience scaling backend infrastructure</li>
</ul>
<p>Job #: P74890</p>
<p>*Answers to crypto-related questions may be used to evaluate your on-chain experience.</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$186,065-$218,900 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$186,065-$218,900 USD</Salaryrange>
      <Skills>Kubernetes, AWS, GCP, Software engineering, Compute orchestration, Automation, Backend services, Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet platform.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7576764</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b13be8f-8b4</externalid>
      <Title>Product Engineer</Title>
      <Description><![CDATA[<p>At Intercom, you will be a product engineer - someone who solves real customer problems through a smart and efficient application of your technical knowledge and your tools. You’ll be part of one of our multidisciplinary product teams, where you will build both back-end and front-end systems, and work closely with designers, product managers, researchers, and data analysts.</p>
<p>We’re facing many exciting scaling challenges and we’re building a robust platform where your expertise can be applied to areas such as building a beautiful messenger composer, rule matching, deliverability, security, app availability and machine learning, to name a few.</p>
<p>As an experienced engineer you will:</p>
<ul>
<li>Develop technical plans and contribute to our technical architecture as we scale our products to serve tens of millions of people every day.</li>
<li>Write Ruby code, which knits together a lot of AWS, infrastructure, platform, and SaaS technologies that form the core of Intercom’s backend infrastructure.</li>
<li>Ship a change to production on your first day and a feature in your first week. That “day one” change is automatically deployed to production along with 100 other deployments (on average) each weekday.</li>
<li>Build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</li>
<li>Grow your team’s capacity by mentoring other engineers and interviewing candidates. This is a chance to be an integral part of building and growing a team.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Flexible paid time off policy</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, AWS, infrastructure, platform, SaaS technologies, high-level programming language, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6810055</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d8bd5af3-5b3</externalid>
      <Title>Enterprise Solutions Engineer</Title>
      <Description><![CDATA[<p>As an Enterprise Solutions Engineer, you will partner with Enterprise Account Executives to showcase the power of Cresta to potential customers. You will provide technical insights and solutioning to stakeholders both internal and external by understanding customers&#39; technical requirements. You will conduct product demos, manage technical validation activities, and ultimately help develop the business case for the prospect during the sales cycle.</p>
<p>Responsibilities:</p>
<ul>
<li>Qualify new sales opportunities by understanding customer requirements and converting them to Cresta technical requirements</li>
<li>Partner with Enterprise Account Executives to discover and understand the prospect&#39;s situation and the challenges they are experiencing</li>
<li>Lead discovery calls with prospective customers &amp; internal cross-functional teams to build and deliver product demos</li>
<li>Solve problems for potential customers and demonstrate the value of the Cresta product</li>
<li>Provide your prospects with insights and learnings from your experience helping customers improve their contact center and customer experience operations</li>
<li>Drive adoption during proof-of-value engagements by training individual managers and users on the Cresta solution</li>
<li>Translate prospect use cases into brilliant technical solutions and demonstrate the path to ROI</li>
<li>Deliver captivating product demos highlighting value propositions to get prospects excited about how Cresta will help them reach their goals</li>
<li>Run ROI workshops to translate our solution into a financial business case proposal</li>
<li>Provide feedback to product management about the successes and failures in the field</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>4+ years of experience in customer-facing roles, with 1–3 years in a technical pre-sales capacity supporting large enterprise sales cycles</li>
<li>Deep hands-on expertise with Conversational AI and CCaaS platforms, helping customers modernize contact center operations</li>
<li>Known for a strong work ethic, enthusiasm, and thoughtful engagement with clients and internal teams alike</li>
<li>Desire to practice and prepare your presentation (or demo) meticulously, as you always strive for perfection</li>
<li>Natural problem-solver; resourceful in leveraging internal teams and cross-functional collaboration to move deals forward</li>
<li>Fast learner with a passion for new technology and a talent for simplifying complexity for customers</li>
<li>Experienced with Salesforce.com, contact center infrastructure, and enterprise SaaS environments</li>
<li>You embody our core Operating Principles</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Flexible PTO to take the time you need, when you need it</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Retirement savings plan to help you plan for the future</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>In-office meal program and commuter benefits provided for onsite employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000 – $185,000</Salaryrange>
      <Skills>Conversational AI, CCaaS platforms, Salesforce.com, Contact center infrastructure, Enterprise SaaS environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that provides a platform combining AI and human intelligence to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4906900008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>262aa1cb-01c</externalid>
      <Title>Head of Corporate Engineering</Title>
      <Description><![CDATA[<p>As Head of Corporate Engineering, you will be responsible for enterprise engineering and operations globally. You will build and manage a highly technical enterprise engineering team, develop first-principles-based strategies, and enable strong enterprise security.</p>
<p>Key responsibilities include engineering, securing, and optimizing cloud infrastructure, Identity and Access Management, endpoints, and collaboration tools, and ensuring compliance with SOX, PCI DSS, and FedRAMP. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>
<p>Strong technical and influential leadership, coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment, is essential. This role reports directly to the Vice President, Infrastructure and Operations.</p>
<p>Responsibilities:</p>
<p>In this influential role, you will be responsible for:</p>
<p>Securing the Enterprise: Working closely with Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success measured in continuous improvement of our enterprise security hardening standards</p>
<p>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure including establishing Infrastructure Provisioning, SRE services, 24/7 on-call support, Infra as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured in increased developer velocity while securing &amp; scaling the cloud infrastructure</p>
<p>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability and availability of engineering tools like Github, Jira, Atlassian, and more</p>
<p>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and establishing standards / SLAs. Enforce security policies, configure &amp; manage security settings and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and % of manual touch</p>
<p>Collaboration Management: Ensure we provide world class tools to our employees to be extremely productive and collaborative. This would include but not be limited to managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction</p>
<p>Identity &amp; Access Management: Manage the IAM team from IAM implementation, access standards enforcement, SLA management, and compliance to various standards like FedRAMP, IL5, PCI, and more. Included are both internal and external identity providers to be managed. Success is measured by compliance, Identity governance, and availability</p>
<p>Desired Success Outcomes</p>
<p>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</p>
<p>Well defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure. Overhaul of current processes and workflows to address inefficiencies and increase team velocity</p>
<p>Robust endpoint security, with implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</p>
<p>Deliver high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub) with a robust forward-looking roadmap</p>
<p>Efficient operational support for Tier 3 IT services with minimized production incidents. Implementation of robust incident and change management processes with mature operational practice</p>
<p>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely, smooth transitions during M&amp;A integrations</p>
<p>Development and implementation of automation tools and frameworks, and identification of automation opportunities to reduce manual toil and improve accuracy</p>
<p>Qualifications:</p>
<p>10 years of experience managing Cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS. Experience with GCP and Azure will be a plus</p>
<p>In-depth understanding of Cloud native technologies to lead and guide the team. Must have hands-on experience in troubleshooting and debugging issues in production environments</p>
<p>Working experience managing DevOps/SRE practices: OKRs (Objectives and Key Results), Agile development, infrastructure as code, SRE (Site Reliability Engineering), and DevOps measurement such as DORA KPIs</p>
<p>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify, integrate, and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for Dev metrics)</p>
<p>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong communication skills, with clear verbal communication and written communication skills</p>
<p>Collaborative style: partners well with cross-functional teams to solve hard problems and to complete complex deliverables with quality and business outcomes</p>
<p>Provide mentorship and guidance to team members to ensure that their skills and knowledge are kept up-to-date</p>
<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $265,000-$364,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,000-$364,300 USD</Salaryrange>
      <Skills>Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7293607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2139c1a4-b0e</externalid>
      <Title>Solution Architect</Title>
      <Description><![CDATA[<p>Joining Dialpad as a Solutions Architect means stepping into a pivotal role where your problem-solving prowess and genuine passion for helping others will play a crucial part in ensuring our customers are set up for success from their Day 1 with Dialpad.</p>
<p>From the initial call to the final interaction, you&#39;ll have the opportunity to dazzle and support our clients every step of the way, making a lasting impact with each connection.</p>
<p>As a Solutions Architect, you will act as a subject matter expert, provide consulting and guidance on technical solutions, and ultimately deliver a world-class experience to our customers.</p>
<p>This position reports to our Manager, Solution Architect. Must be able to work US hours.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design and configure contact center call flows, ensuring seamless integrations with business systems and accurate reporting data.</li>
<li>Collaborate with clients to understand their operational needs and tailor Dialpad solutions to optimize call handling, automation, and analytics.</li>
<li>Provision and configure desk phones, providing best practices and recommendations for an efficient setup.</li>
<li>Lead deployment efforts, overseeing call flow design, integrations, and system configurations to align with business objectives.</li>
<li>Troubleshoot issues and help identify and report bugs during deployments, ensuring a smooth transition and minimizing disruptions for end users.</li>
<li>Analyze and refine workflows, proactively identifying areas for improvement to enhance efficiency and performance.</li>
<li>Perform risk analysis and change management, ensuring project success and on-time delivery.</li>
<li>Conduct video conferencing sessions to provide expert insights, guide clients through implementation, and ensure the adoption of best practices.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Excellent grasp of various modern cloud communications platforms, such as Google G-Suite and Office 365.</li>
<li>Experience with Business VoIP Telephony Services (On-Premise or SaaS).</li>
<li>Understanding and experience with: VoIP CODECs and technologies (G.711, G.729, G.722, OPUS, H.264, VP8, VP9, WebRTC, H.323, SIP, Network Analyzers).</li>
<li>Network Infrastructure (Firewalls, Routers, Switches &amp; Wireless); WAN Technologies (MPLS, VPLS &amp; SD-WAN).</li>
<li>Data Center Technologies (Public &amp; Private Clouds).</li>
<li>Software components involved in enterprise service delivery (web servers, application servers, databases, web services, mainframes, network-attached storage, and other related technologies).</li>
<li>Proven track record in implementing CCaaS (Contact Center as a Service) and UCaaS (Unified Communications as a Service) solutions.</li>
<li>Engineering Services: Expertise in specialized technical and functional areas, including software engineering, programming languages, system integrations, and database management.</li>
</ul>
<p><strong>Why Join Dialpad</strong></p>
<ul>
<li>Work at the center of the AI transformation in business communications.</li>
<li>Build and ship agentic AI products that are redefining how companies operate.</li>
<li>Join a team where AI amplifies every employee’s impact.</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth.</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program to help you reach your full potential.</p>
<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Google G-Suite, Office 365, Business VoIP Telephony Services, VoIP CODECs and technologies, Network Infrastructure, WAN Technologies, Data Center Technologies, Software components involved in enterprise service delivery, CCaaS (Contact Center as a Service), UCaaS (Unified Communications as a Service)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8430262002</Applyto>
      <Location>Pasig City, Metro Manila, Philippines</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1edcf0e3-e4b</externalid>
      <Title>Product Manager, Gen AI Platform</Title>
      <Description><![CDATA[<p>We are hiring Product Managers across multiple teams within our GenAI organization. These roles span both demand-side products (the tools and platforms our customers interact with) and supply-side products (the systems that power our contributor ecosystem).</p>
<p>As a Product Manager at Scale, you will sit at the intersection of these two sides, shaping the systems, tooling, and experiences that make this marketplace work at unprecedented quality and scale.</p>
<p>You will work with dedicated engineering, design, and data science teams, as well as operations, finance, growth, and customer-facing stakeholders. The problems are technically complex, the pace is fast, and the impact is measurable.</p>
<p>Whether you are on the demand side (shaping the products customers use to create and evaluate training data) or the supply side (building the systems that power our global contributor marketplace), you will own your product area end-to-end, from strategy to execution to instrumentation.</p>
<p>Scale is a growth-stage company with the resources of a well-funded leader and the urgency of a startup. PMs here operate with significant autonomy, ship frequently, and are expected to be deeply analytical and hands-on.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Set the product strategy and roadmap for your area, grounded in customer needs, data analysis, and business impact</li>
<li>Develop and execute a data-driven product roadmap through close collaboration with senior leadership, engineering, operations, data science, analytics, and design</li>
<li>Translate customer and internal-user needs into clear, well-defined functional and technical requirements backed by data analysis and deep understanding of your users</li>
<li>Guide and interface closely with engineering and data teams to define scope, review and refine technical capabilities, prioritize projects for release, and identify new opportunities</li>
<li>Build long-term instrumentation, monitoring, and evaluation capabilities for product performance tracking and insight generation</li>
<li>Establish business cases and projected return on investment to identify and prioritize opportunities</li>
<li>Partner with finance and business leaders to manage impact on the profitability and growth of the overall business</li>
<li>Communicate product vision, strategy, and progress to executive stakeholders and cross-functional partners</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4–10 years of experience in Product Management in the tech industry, with scope appropriate to level (L4: 4–6 yrs, L5: 6–8 yrs, L6: 8–10+ yrs)</li>
<li>Strong business acumen and analytical rigor, with demonstrated success driving products in ambiguous, high-growth environments</li>
<li>Experience translating complex technical systems into clear product strategies, and comfort engaging deeply with engineering and data science teams</li>
<li>Excellent communication and stakeholder management skills, capable of influencing across technical and non-technical audiences</li>
<li>Experience building products from the ground up and iterating through the scaling journey of a business</li>
<li>Bachelor’s or advanced degree in a quantitative, engineering, or related discipline</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>The base salary range for this full-time position in the locations of San Francisco, New York, Seattle is:</p>
<p>$205,600-$257,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,600-$257,000 USD</Salaryrange>
      <Skills>Product Management, Data Analysis, Business Acumen, Communication, Stakeholder Management, Technical Strategy, Engineering, Data Science, AI/ML, Data Infrastructure, Marketplace Businesses</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops the data infrastructure that powers the world&apos;s most advanced AI. It is a growth-stage company.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4675842005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>df84dd73-b74</externalid>
      <Title>Strategic Account Executive, Digital Natives - India</Title>
      <Description><![CDATA[<p>As a Strategic Account Executive for our Digital Natives segment, you&#39;ll drive GitLab&#39;s growth by helping leading digital native organisations across India adopt, implement, and expand their use of our AI-powered DevSecOps platform.</p>
<p>You&#39;ll focus on large, complex enterprise accounts, guiding customers through modernisation and DevSecOps transformations while driving pipeline generation that translates into measurable Net ARR and long-term expansion.</p>
<p>In this role, you&#39;ll use your understanding of the software development lifecycle, including continuous integration and continuous delivery (CI/CD) automation, secure development practices, and infrastructure modernisation, to connect customer stakeholders with GitLab&#39;s field organisation so GitLab is seen as a trusted, long-term partner across the full sales cycle.</p>
<p>For example, you&#39;ll build and grow a territory plan focused on large, high-growth digital native organisations, from new logo prospecting through long-term account expansion on GitLab&#39;s AI-powered DevSecOps platform.</p>
<p>Key responsibilities:</p>
<ul>
<li>Lead and grow GitLab&#39;s largest and most strategic Digital Native prospects and customers across your territory, focusing on organisations building modern software products at scale</li>
<li>Drive the full enterprise sales cycle, from prospecting and pipeline generation through qualification, evaluation, negotiation, and close within large, complex Digital Native accounts</li>
<li>Provide hands-on account leadership and direction throughout the pre- and post-sales process to ensure a smooth customer experience and strong adoption of GitLab&#39;s AI-powered DevSecOps platform</li>
<li>Partner closely with Sales Development Representatives, Solutions Architects, Customer Success, and strategic channel partners to generate qualified opportunities, co-sell, and execute account strategies that drive new business and expansion within Digital Native organisations</li>
<li>Develop and maintain detailed account plans for priority Digital Native customers, including opportunity mapping, stakeholder alignment, and multi-threaded engagement across engineering, security, platform, and business leaders</li>
<li>Coordinate and facilitate the involvement of cross-functional GitLab team members, including sales leadership, marketing, product, and support, to progress opportunities and deliver an excellent customer experience</li>
<li>Prepare activity and forecast reports, contribute to forecasting and pipeline reviews, and share root cause analysis and lessons learned from wins and losses with account managers, marketing, and technical teams</li>
<li>Act as the voice of the customer by contributing product ideas to our public issue tracker, preparing and delivering customer-facing and internal presentations, quotes, proposals, and formal sales documents that address Digital Native business challenges and clearly communicate long-term value and outcomes</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Deep experience driving complex B2B software sales cycles with enterprise customers, ideally in DevSecOps, software development tools, or adjacent SaaS solutions that support the software development lifecycle</li>
<li>Ability to prospect, build pipeline, and close new business while expanding strategic relationships within large digital native accounts across your territory</li>
<li>Strong understanding of modern software delivery, including continuous integration and continuous delivery (CI/CD), secure development practices, and cloud and infrastructure modernization, with the ability to connect platform capabilities to customer outcomes</li>
<li>Proven ability to navigate and influence complex organisations, building trusted relationships with senior stakeholders across engineering, security, operations, and business teams</li>
<li>Experience creating and executing account plans for priority accounts, including opportunity mapping, multi-threaded engagement, and disciplined deal and account management</li>
<li>Effective communication and interpersonal skills, including comfort leading customer presentations, negotiations, and executive-level conversations and coordinating internal resources to move opportunities forward</li>
<li>Capacity to work autonomously and asynchronously in a fully remote environment while staying aligned to shared goals, processes, and forecasting expectations across the broader GitLab team</li>
<li>Familiarity with forecasting, pipeline hygiene, and reporting, including sharing learnings from wins and losses to improve repeatable sales motions</li>
</ul>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>DevSecOps, software development lifecycle, continuous integration and continuous delivery (CI/CD), secure development practices, infrastructure modernization, complex B2B software sales cycles, enterprise customers, software development tools, SaaS solutions, modern software delivery, cloud and infrastructure modernization</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8435065002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>792cef6b-cf8</externalid>
      <Title>Transaction Principal</Title>
      <Description><![CDATA[<p>As a Transaction Principal for Australia at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our Australian data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems in the region: you&#39;ll bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams, and partner closely with our Compute Markets team, who own the Australia market strategy and government relationships. This is not an established leasing org; you&#39;ll be building process alongside execution.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the RFP and commercial sourcing process for Australian data center deals, managing developer outreach, proposal evaluation, and competitive selection</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing status</li>
<li>Develop and maintain transaction timelines, tracking critical-path items and proactively identifying risks that could impact deal closure</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
<li>Manage complex digital infrastructure development activities to a construction-ready state, through a developer or directly</li>
<li>Marry the right projects, capital stacks, and developers at the right stages</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint in region</li>
<li>Partner with the Compute Markets Manager to prioritize sites and counterparties, and feed deal learnings back into Australia market strategy</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Have experience working in or with Australian markets, with knowledge of the local real estate and development landscape</li>
<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Demonstrate exceptional communication skills and can coordinate effectively across time zones with HQ-based teams and external partners</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>
<li>Come from the development side of the industry rather than traditional brokerage/leasing: you understand how DC development works and how value is created (yield-on-cost, cap rates, development fees)</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>
<li>Understand utility coordination, power procurement, or energy considerations in data center transactions, particularly in the Australian context (NEM, grid connection)</li>
<li>Have relationships within the Australian data center developer and broker ecosystem</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>transaction management, commercial real estate, data center leasing, infrastructure procurement, negotiation, project management, RFP processes, competitive sourcing, Australian markets, local real estate and development landscape, communication skills, data center or hyperscale infrastructure transactions, DC development, yield-on-cost, cap rates, development fees, technical requirements for AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy considerations, Australian data center developer and broker ecosystem, corporate development, strategic partnerships, infrastructure investment, high-growth technology companies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5154345008</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>648f4814-708</externalid>
      <Title>Senior Software Engineer, Machine Learning (Commerce)</Title>
      <Description><![CDATA[<p>We are looking for a Senior Machine Learning Engineer to join our Revenue ML team at Discord. This role sits at the intersection of Discord&#39;s two most strategic revenue pillars , our growing 1P Shop and our newly launched Game Commerce platform. You&#39;ll be the founding ML voice for commerce discovery and personalization, building systems from the ground up that power recommendations, social commerce mechanics, and marketing targeting across both first-party and third-party storefronts.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Architecting and owning the ML foundations for commerce discovery: user, item, and interaction embeddings that power personalized recommendations across shop surfaces (homepage, cart, post-purchase, wishlist, and more).</li>
<li>Designing and deploying scalable real-time recommendation and ranking systems that support a growing catalog of 1P and 3P items across heterogeneous game publisher inventories.</li>
<li>Building ML-powered marketing targeting systems that identify the right users for the right campaigns (new buyer discounts, drop campaigns, weekly deals, and seasonal promotions), driving conversion without conditioning users to wait for discounts.</li>
<li>Leveraging Discord&#39;s unique social graph to build social commerce ML: gifting recipient prediction, group buying conversion modeling, and friend-group recommendations that differentiate Discord from traditional game storefronts.</li>
<li>Driving A/B testing infrastructure and model monitoring for deep learning models to translate experimentation results into actionable product decisions.</li>
<li>Partnering closely with Shop, Game Commerce, Revenue Infra, ML Infra, and Data Engineering teams to define ML requirements, surface integration points, and influence the commerce roadmap.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>4+ years of experience as a Machine Learning Engineer, with a track record of owning and shipping recommendation or personalization systems end-to-end.</li>
<li>Deep expertise in applied deep learning, particularly embedding models, two-tower architectures, and retrieval/ranking systems for e-commerce or content recommendation.</li>
<li>Strong proficiency in Python and deep learning frameworks (PyTorch preferred).</li>
<li>Experience building and operating real-time ML serving infrastructure at scale, including feature stores, model serving, and A/B testing frameworks.</li>
<li>Demonstrated ability to work in early-stage, high-ambiguity environments and build ML systems from the ground up, not just improve existing ones.</li>
<li>Experience translating ML evaluation metrics and experiment results into product roadmap decisions and business impact.</li>
<li>Strong cross-functional instincts: you&#39;re comfortable partnering with product, engineering, data science, and business stakeholders to align on priorities and drive execution.</li>
</ul>
<p>Bonus skills include: experience applying graph ML or social network signals (social affinities, community behavior) to recommendation or personalization problems; familiarity with personalized marketing systems, including lifecycle targeting, audience segmentation, and campaign optimization; and familiarity with loyalty, rewards, or incentive programs.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000 to $247,500 + equity + benefits</Salaryrange>
      <Skills>Machine Learning, Deep Learning, Python, PyTorch, Real-time ML serving infrastructure, Feature stores, Model serving, A/B testing frameworks, Graph ML, Social network signals, Personalized marketing systems, Loyalty, rewards, or incentive programs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including playing video games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8438033002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b05b9f90-7d3</externalid>
      <Title>Data Center Engineer, Resource Efficiency – Compute Supply</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Power &amp; Resource Efficiency Engineer, you&#39;ll sit at the intersection of IT and facilities, building the systems, models, and control loops that optimize how we allocate and consume power, cooling, and physical capacity across our TPU/GPU fleet.</p>
<p>You&#39;ll own the technical strategy for turning raw data center capacity into reliable, efficient compute, working across power topology, workload scheduling, and real-time telemetry to push utilization as close to the physical envelope as possible while maintaining our availability commitments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build models that forecast consumption across electrical and mechanical subsystems, informing capacity planning, energy procurement, and oversubscription targets and risks; this includes statistical modeling of cluster utilization, workload profiles, and failure modes.</li>
<li>Design IT/OT interfaces that bridge compute orchestration with facility controls, enabling real-time telemetry across accelerator hardware, power distribution, cooling, and schedulers.</li>
<li>Build and operate load management systems that use power and cooling topology to enable power/thermal-aware workload placement, maximizing throughput while meeting SLOs.</li>
<li>Partner with data center providers to drive design optimizations and hold them accountable to SLA-grade performance standards, providing technical diligence on partner architectures.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Deep knowledge of data center power distribution and cooling architectures, and how they interact with IT load profiles. Experience with reliability engineering, SLA development, and failure-mode analysis.</li>
<li>Proficiency in statistical modeling and simulation for infrastructure capacity or power utilization.</li>
<li>Familiarity with SCADA/BMS/EPMS, telemetry pipelines, and control systems. Experience building software that bridges IT and OT.</li>
<li>Exposure to accelerator deployments and their power management interfaces strongly preferred.</li>
<li>Demand response, grid interaction, or behind-the-meter generation experience is a plus.</li>
<li>Ability to translate between infrastructure engineering, software teams, and external partners.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Electrical Engineering, Mechanical Engineering, Power Systems, Controls Engineering, or a related field.</li>
<li>5+ years of experience in data center infrastructure or facility engineering.</li>
<li>Demonstrated experience with data center power distribution and cooling system architectures.</li>
<li>Experience building or operating software-based power management, load scheduling, or control systems.</li>
<li>Proficiency in Python or similar languages for statistical modeling, simulation, or automation of data center infrastructure optimizations.</li>
<li>Familiarity with SCADA, BMS, EPMS, or industrial control systems and associated protocols (Modbus, BACnet, SNMP).</li>
<li>Track record of cross-functional collaboration across hardware, software, and facilities teams.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Master&#39;s or PhD in Controls, Power Systems, or a related discipline and 3+ years of experience in data center infrastructure or facility engineering.</li>
<li>Experience with accelerator-class deployments and their power management interfaces.</li>
<li>Background in control theory, dynamical systems, or cyber-physical systems design.</li>
<li>Experience with energy storage, microgrid integration, demand response, or behind-the-meter generation.</li>
<li>Familiarity with reliability engineering methods.</li>
<li>Experience with SLA development, availability modeling, or service credit frameworks.</li>
<li>Exposure to ML/optimization techniques applied to infrastructure or energy systems.</li>
</ul>
<p><strong>Salary</strong></p>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
<p><strong>Benefits</strong></p>
<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with our team.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>data center power distribution, cooling architectures, IT load profiles, reliability engineering, SLA development, failure-mode analysis, statistical modeling, simulation, infrastructure capacity, power utilization, SCADA, BMS, EPMS, telemetry pipelines, control systems, industrial control systems, Modbus, BACnet, SNMP, accelerator deployments, power management interfaces, demand response, grid interaction, behind-the-meter generation, Python, automation, data center infrastructure optimizations, control theory, dynamical systems, cyber-physical systems design, energy storage, microgrid integration, availability modeling, service credit frameworks, ML/optimization techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It operates at massive scale, with a focus on extracting maximum compute throughput from every watt.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5159642008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5717691a-508</externalid>
      <Title>Staff Infrastructure Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are looking for a Staff Infrastructure Software Engineer to act as a primary technical lead, engineering the &#39;paved road&#39; for our knowledge retrieval and inference engines. You will define the deployment standards for Agentic workflows at scale, bridging the gap between complex AI orchestration and world-class infrastructure.</p>
<p>The ideal candidate thrives in a fast-paced environment, has a passion for both deep technical work and mentoring, and is capable of setting a long-term technical strategy for a critical domain while maintaining a strong, hands-on delivery focus.</p>
<p>You will architect and implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers.</li>
<li>Using our own data and AI platform to analyse build and test logs and metrics to identify areas for improvement.</li>
<li>Defining the architectural patterns for our multi-cloud infrastructure to support secure, reliable, and scalable Agentic workflows for enterprise customers.</li>
<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>
<li>Designing and championing highly scalable, reliable, and low-latency infrastructure and frameworks for building, orchestrating, and evaluating multi-agent systems at enterprise scale.</li>
<li>Leading the infrastructure roadmap with a strong focus on compliance, privacy, and security standards, including designing change management and data isolation strategies.</li>
<li>Owning the development and maintenance of our best-in-class Agentic observability platform (logging, metrics, tracing, and analytics) to proactively ensure system health and enable rapid incident response.</li>
<li>Driving developer efficiency by building automated tooling and championing Infrastructure-as-Code (IaC) paradigms throughout the engineering organization to improve workflows and operational efficiency.</li>
</ul>
<p>The ideal candidate has proven experience in a senior role, with 5+ years of full-time software engineering experience, and a deep understanding of modern infrastructure practices, including CI/CD, IaC (e.g., Terraform, Helm Charts), container orchestration (e.g., Kubernetes) and observability platforms (e.g., Datadog, Prometheus, Grafana).</p>
<p>Extensive experience with at least one major cloud provider (AWS, Azure, or GCP) and strong knowledge of security and compliance in enterprise environments, with a focus on access management, data isolation, and customer-specific VPC setups is required.</p>
<p>Proficiency in Python or JavaScript/TypeScript, and SQL is also necessary.</p>
<p>Bonus points for hands-on experience and a passion for working with Agents, LLMs, vector databases, and other emerging AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,200-$310,500 USD</Salaryrange>
      <Skills>Cloud computing, Infrastructure as Code, Container orchestration, Observability platforms, Security and compliance, Access management, Data isolation, Customer-specific VPC setups, Python, JavaScript/TypeScript, SQL, Agents, LLMs, Vector databases, Emerging AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599700005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1916d784-60b</externalid>
      <Title>Electrical Commissioning Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Electrical Commissioning Engineer to join our infrastructure team. In this role, you will lead and execute the commissioning, testing, and handover of critical electrical systems supporting xAI&#39;s AI supercomputing facilities.</p>
<p>Your scope includes power generation, electrical distribution, BESS, UPS, generators, switchgear, and related infrastructure. You will ensure systems are fully operational, reliable, and compliant with design intent, safety standards, and performance requirements before transitioning to operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop, implement, and maintain comprehensive commissioning plans, protocols, test procedures, checklists, and scripts for electrical systems with related control systems</li>
<li>Lead functional, integrated systems, and performance testing (including factory acceptance testing (FAT), site acceptance testing (SAT), and load bank testing) to verify system readiness and compliance with specifications</li>
<li>Coordinate with design engineers, construction teams, vendors, and operations to resolve issues, perform troubleshooting, and execute corrective actions during commissioning phases</li>
<li>Conduct thorough documentation of test results, deficiencies, punch lists, and as-built conditions; prepare commissioning reports and turnover packages for operations handover</li>
<li>Perform quality assurance reviews of system installations, calibrations, sequences of operations, and safety interlocks</li>
<li>Support startup and initial operations of power plants, substations, UPS, generators, or other datacenter critical electrical infrastructure</li>
<li>Identify risks and opportunities for improvement in commissioning processes; recommend enhancements to standards, tools, and workflows</li>
<li>Collaborate cross-functionally to ensure seamless integration of bespoke AI infrastructure systems (e.g., high-reliability power redundancy, monitoring/EPMS)</li>
<li>Maintain a strong focus on safety, compliance with industry codes and standards (NEC, NFPA, etc.), and xAI&#39;s operational excellence standards in fast-paced, high-stakes environments</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Bachelor’s degree in Electrical Engineering or a related field (or equivalent practical experience)</li>
<li>Proficient in commissioning tools and processes (e.g., test plan development, data logging, fault simulation, and integrated systems testing)</li>
<li>Experienced with power systems (medium/low voltage distribution, generators, UPS, switchgear, BESS)</li>
<li>Detail-oriented with excellent analytical, problem-solving, and troubleshooting skills under tight deadlines</li>
<li>Strong communicator capable of working effectively with multidisciplinary teams, contractors, and stakeholders</li>
<li>Comfortable in a dynamic, fast-moving environment with evolving priorities and ambitious timelines</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>5+ years of hands-on commissioning experience in mission-critical facilities (data centers, power plants, or similar high-reliability environments), with strong focus on electrical systems</li>
<li>Experience commissioning natural gas power generation, datacenter infrastructure, or AI/high-performance compute facilities</li>
<li>Familiarity with tools like SEL relays, BMS/EPMS, Navisworks, or commissioning management software</li>
<li>Knowledge of redundancy architectures (N+1, 2N), energy efficiency metrics (PUE), and safety protocols in high-voltage environments</li>
<li>Previous owner-side or client-facing commissioning role</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>commissioning tools and processes, power systems, electrical distribution, BESS, UPS, generators, switchgear, detail-oriented, analytical, problem-solving, troubleshooting, strong communicator, hands-on commissioning experience, natural gas power generation, datacenter infrastructure, AI/high-performance compute facilities, SEL relays, BMS/EPMS, Navisworks, commissioning management software, redundancy architectures, energy efficiency metrics, safety protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5082714007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e58d4e9e-165</externalid>
      <Title>Account Executive</Title>
      <Description><![CDATA[<p>We are looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion within strategic Enterprise accounts. You will be the owner of a defined territory where you will build your own pipeline, tell the Elastic Search AI story, and close complex, multi-stakeholder deals in a consumption-based model.</p>
<p>As an Enterprise Account Executive, you will be responsible for developing and executing a proactive outbound cadence that generates ≥50% of your booked opportunities. You will uncover pain, business impact, budget, and decision criteria using frameworks like MEDDPICC so you chase only the highest-confidence deals. You will craft and deliver tailored narratives and live demos that map Elastic&#39;s Search, Observability, and Security capabilities to measurable business outcomes.</p>
<p>You will collaborate with customers to build formal close plans and keep your CRM up-to-date, maintaining ≥90% forecast accuracy within ±10%. You will lead high-stakes contract and pricing discussions, defend your value, structure give/get trades, and land multi-year consumption commitments. You will position Elastic as the Search AI platform of choice by speaking fluently about cloud economics, usage-based pricing, and modern data architectures.</p>
<p>You will work hand-in-glove with Solutions Architects, Customer Success, Marketing, and RevOps to accelerate deals and drive exceptional customer outcomes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS quota-carrying success, Expert discovery and qualification skills, Compelling value storytelling, Strong negotiation chops, Technical and cloud fluency, Prior experience at an open-source or developer-centric infrastructure company, Familiarity with observability (logs, metrics, traces) or security analytics (SIEM/XDR) use cases</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability. It has a global presence and serves over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7505982</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b79d9627-55a</externalid>
      <Title>Research Engineer, Infrastructure, Training Systems</Title>
      <Description><![CDATA[<p>We&#39;re seeking an infrastructure research engineer to design and build scalable, efficient training systems for large models. As a key member of our team, you&#39;ll take ownership of the training stack end-to-end, ensuring every GPU cycle drives scientific progress. Your goal is to make experimentation and training at Thinking Machines fast and reliable, allowing our research teams to focus on science, not system bottlenecks.</p>
<p>Key responsibilities include designing, implementing, and optimizing distributed training systems, developing high-performance optimizations, and establishing standards for reliability, maintainability, and security. You&#39;ll collaborate with researchers and engineers to build scalable infrastructure and publish learnings through internal documentation, open-source libraries, or technical reports.</p>
<p>We&#39;re looking for someone who blends deep systems and performance expertise with a curiosity for machine learning at scale. A strong understanding of deep learning frameworks, such as PyTorch, and experience working on distributed training for large models are preferred. If you have a track record of improving research productivity through infrastructure design or process improvements, that&#39;s a plus.</p>
<p>This role is based in San Francisco, California, and offers a competitive salary range of $350,000 - $475,000 USD per year, depending on background, skills, and experience. We sponsor visas and offer generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD per year</Salaryrange>
      <Skills>deep learning frameworks, distributed training, high-performance optimizations, reliability, maintainability, and security, scalable infrastructure, past experience working on distributed training for large models, track record of improving research productivity through infrastructure design or process improvements, contributions to open-source ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is an AI research and product company whose team helped build widely used AI products, including ChatGPT and Character.ai, and contributes to open-source projects like PyTorch.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013932008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d049b67-925</externalid>
      <Title>Senior Software Engineer, Billing Platform</Title>
      <Description><![CDATA[<p>About Scale At Scale AI, our mission is to accelerate the development of AI applications.</p>
<p>We&#39;re looking for entrepreneurial Software Engineers to join our Billing Platform team. In this role, you&#39;ll have the opportunity to drive the revenue tracking and billing system for our Generative AI products.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and operate flexible and accurate financial systems</li>
<li>Work across backend, frontend, and accounting-related systems</li>
<li>Deliver at a high velocity and level of quality to engage our customers</li>
<li>Work across the entire product lifecycle from conceptualization through production</li>
<li>Be able, and willing, to multi-task and learn new technologies quickly</li>
<li>Provide critical input in the Billing team’s roadmap and technical direction</li>
<li>Work closely with cross-functional partners like finance, product, software engineers, and operations to identify opportunities for business impact and to understand, refine, and prioritize requirements for billing schemes and financial infrastructure.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years of software engineering experience, ideally in high-growth, product-focused environments</li>
<li>Proven track record of shipping production systems at scale</li>
<li>Experience driving reliability and performance across critical infrastructure systems, ensuring platforms scale predictably and operate with high availability</li>
<li>Strong technical depth in one or more areas: front-end frameworks, distributed systems, data infrastructure, or developer tooling</li>
<li>Experience working across the stack, ideally with React, TypeScript, Node.js, Python, MongoDB, Elasticsearch, and/or Temporal</li>
<li>Strong product sense and ability to translate ambiguous problems into technical solutions</li>
<li>Comfortable working in a fast-paced, high-ownership environment with a bias toward execution</li>
<li>Excited to join a dynamic hybrid team based in San Francisco or New York City</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>software engineering, high-growth environments, product-focused environments, front-end frameworks, distributed systems, data infrastructure, developer tooling, React, TypeScript, Node.js, Python, MongoDB, Elasticsearch, Temporal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI is a leading AI data foundry that helps fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4630325005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7dc0b69a-5b8</externalid>
      <Title>Senior Engineer, Storage Control Plane</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Storage Engineer to play a key role in designing, building, and operating the control plane for our high-performance AI storage platform. You&#39;ll help evolve CoreWeave&#39;s storage systems by building reliable, scalable, and high-throughput solutions that power some of the largest and most innovative AI workloads in the world.</p>
<p>This role involves close collaboration with teams across infrastructure, compute, and platform to ensure our storage services scale automatically and seamlessly while maximizing performance and reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement a highly scalable multi-tenant control plane that supports CoreWeave&#39;s growing AI storage and cloud infrastructure needs.</li>
<li>Contribute to the development of exabyte-scale, S3-compatible object storage and distributed file systems, and integrate dedicated storage clusters into diverse customer environments.</li>
<li>Work with technologies such as RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, and distributed filesystems to optimize storage performance and efficiency.</li>
<li>Participate in efforts to improve the reliability, durability, and observability of our storage stack.</li>
<li>Collaborate with operations teams to monitor, analyze, and optimize storage systems using telemetry, metrics, and dashboards to improve performance, latency, and resilience.</li>
<li>Work cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack.</li>
<li>Share your knowledge and mentor other engineers on best practices in building distributed, high-performance systems.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>6–10 years of experience working in storage systems engineering or infrastructure.</li>
<li>Strong hands-on experience with object storage or distributed filesystems in production environments.</li>
<li>Experience with one or more storage protocols (e.g. S3, NFS) and file systems such as Ceph, DAOS, or similar.</li>
<li>Proficiency in a systems programming language such as Go, C, or Rust.</li>
<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana).</li>
<li>Solid understanding of cloud-native infrastructure, Kubernetes, and scalable system architecture.</li>
<li>Strong debugging and problem-solving skills in distributed, high-performance environments.</li>
<li>Clear communicator, able to work collaboratively across teams and share technical insights effectively.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>object storage, distributed filesystems, RDMA, GPU Direct Storage, RoCE, InfiniBand, SPDK, cloud-native infrastructure, Kubernetes, scalable system architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4611874006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da439e6e-91e</externalid>
      <Title>Senior Commercial Account Executive, Israel</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>About this Role</strong></p>
<p>The Senior Commercial Account Executive owns the full sales cycle, from prospecting to negotiating and closing sales with new &amp; existing customers, in line with business plans. You will identify and progress cross-sell opportunities to maximise revenue goals, selling new products and generating additional sales revenue through effective sales outreach activity.</p>
<p>Main Responsibilities:</p>
<ul>
<li>Develop and execute a comprehensive account/territory plan to achieve quarterly sales and annual revenue targets in a defined territory and/or account list.</li>
<li>Drive new business acquisition (new customer logos), customer expansion (upsell and cross-sell Cloudflare solutions), and renewal within your territory.</li>
<li>Build a robust sales pipeline through continual engagement and nurturing of key prospect accounts.</li>
<li>Understand customer use-cases and how they pair with Cloudflare’s portfolio solutions in order to identify new sales opportunities.</li>
<li>Craft and communicate compelling value propositions for Cloudflare services. Drive awareness through regular outbound campaigns on product and feature roadmap updates.</li>
<li>Effectively scale the territory with partners.</li>
<li>Accurately forecast commercial outcomes by running a consistent sales process, including driving next step expectations and contract negotiations.</li>
<li>As a trusted advisor, build long-term strategic relationships with key accounts, to ensure customer adoption, retention and expansion. Regularly evaluate usage trends and articulate value to show Cloudflare impact and provide strategic recommendations during business reviews.</li>
<li>Network across different business units with each of your accounts, and multi-thread to identify and engage new divisional buyers.</li>
<li>Position Cloudflare&#39;s platform in each of your target customers, including Cloudflare One and the Connectivity Cloud to realise our full potential in every customer.</li>
<li>Operate internally as a liaison with cross-functional teams to share key customer feedback and insights to improve customer experience and further investments with Cloudflare.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Direct B2B sales experience, adept at new business acquisition and account management</li>
<li>Experience selling a technical, cloud-based product or service</li>
<li>Working knowledge of the cloud infrastructure and security space</li>
<li>Solid understanding of computer networking and how the Internet functions</li>
<li>Keenness for learning technical concepts/terms; a technical background in engineering, computer science, or MIS is advantageous</li>
</ul>
<p>Knowledge/Experience:</p>
<ul>
<li>Fluency in Hebrew</li>
<li>6+ years of B2B selling experience, selling Enterprise Software or SaaS (network security preferred) or Hardware solutions and services to Mid-Enterprise/Enterprise customers</li>
<li>Relevant direct experience, track record, and relationships within enterprise and mid-market accounts in the territory</li>
<li>New business &amp; expansion experience</li>
<li>Experience managing longer, complex sales cycles</li>
<li>Comfort in a fast-paced environment</li>
<li>Enterprise IT/Cyber Security background</li>
<li>Aptitude for learning technical concepts/terms (technical background in engineering, computer science, or MIS a plus)</li>
</ul>
<p>What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>B2B sales experience, New business acquisition and account management, Technical, cloud-based product or service, Cloud infrastructure and security space, Computer networking and Internet functioning, Fluency in Hebrew, Enterprise Software or SaaS (network security preferred), Hardware solutions and services to Mid-Enterprise/ Enterprise customers, Technical background in engineering, computer science, or MIS</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7095765</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features</li>
</ul>
<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>
<ul>
<li>Enable Human-in-the-Loop AI Training</li>
</ul>
<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>
<ul>
<li>Support RLHF and Preference Data Workflows</li>
</ul>
<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>
<ul>
<li>Leverage LLMs in the Review Loop</li>
</ul>
<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>
<ul>
<li>Advance AI Evaluation</li>
</ul>
<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>
<ul>
<li>Create Intuitive, Reviewer-Focused Interfaces</li>
</ul>
<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>
<ul>
<li>Architect Scalable Data &amp; Service Layers</li>
</ul>
<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>
<ul>
<li>Solve Ambiguous, Real-World Problems</li>
</ul>
<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>
<ul>
<li>Ensure System Reliability</li>
</ul>
<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>
<ul>
<li>Elevate the Team</li>
</ul>
<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience with frontend frameworks like React/Redux and backend technologies like Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while driving data-driven technical decisions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f05d190-fce</externalid>
      <Title>Sr. Manager, Field Engineering - Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>A key responsibility of this role is to hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers.</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proof of concepts, and establish themselves as trusted advisors throughout the customer life-cycle.</li>
<li>Travel to customer sites for executive sessions, technical workshops, and building relationships.</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team.</li>
<li>Stay current with emerging Data and AI trends in the digital native tech sector.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience in the data space with a technical product (i.e. data warehousing, big data, cloud infrastructure, or machine learning).</li>
<li>5+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment.</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets.</li>
<li>The ability to inspire a team vision around the unique nature of the digital natives business.</li>
<li>A history of execution by managing workloads and consumption with sales, product, and engineering counterparts.</li>
<li>Experience owning executive alignment in accounts that guide strategic decisions.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $192,100-$264,175 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, technical product, digital native customers, data, analytical, and AI workloads, Solutions Architects, customer-facing teams, hiring, onboarding, and supporting team members, high-growth environment, executive alignment, accounts that guide strategic decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8496009002</Applyto>
      <Location>Colorado; Remote - California; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2aff6a46-3ea</externalid>
      <Title>Manufacturing Software Engineer, Intelligence Systems</Title>
      <Description><![CDATA[<p>As a Software Engineer in the Manufacturing Test organization, you will join a software development team tasked to ensure that we build quality products - in land, sea, and air. You will develop test executive software that can systematically and thoroughly test our products and create analytics to improve our development cycle. You will champion automation, and work to reduce operator time and instruction complexity through the use of parallel execution, data acquisition, automated deployment tools. You will be presented complex, multiplatform problems with heavy reliance on cloud data systems. In this role you’ll need to think creatively and continuously improve our methods of automation, throughput, user interfaces, and data analytics.</p>
<p>This role will be based temporarily in Santa Ana, CA for a 3-month training period before transitioning to Ashville, OH.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop applications for Windows and Linux desktop environments</li>
<li>Integrate cloud data and deployment features while maintaining user authentication and security</li>
<li>Generate automation scripts (python) for debug and prototype development</li>
<li>Triage issues, root cause failures, and coordinate next-steps</li>
<li>Partner with end-users to turn needs into features while balancing user experience with engineering constraints</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Expertise in desktop application development with WPF and C#</li>
<li>Proficiency in ASP.NET and RESTful services with C# on AWS/Azure infrastructure</li>
<li>Hands-on working knowledge of a major relational database (DB2, SQL Server, etc.) and/or NoSQL</li>
<li>Experience working in CI/CD and designing and delivering DevOps automation for app deployment and testing</li>
<li>Bachelor’s degree in Computer Science, Computer Engineering, or related field</li>
<li>Experience working on multi-disciplinary projects, working closely with Electrical / Mechanical / Manufacturing Engineers</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>5+ years of relevant industry experience</li>
<li>Pursuing a Master’s in Computer Science or a related field</li>
<li>Experience with test automation or cloud deployment tools</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
</ul>
<p>US Salary Range $129,000-$171,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>desktop application development, WPF, C#, ASP.NET, RESTful services, AWS/Azure infrastructure, relational database, NoSql, CI/CD, DevOps automation, test automation, cloud deployment tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops advanced sensors and software for various industries.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5080387007</Applyto>
      <Location>Ashville, Ohio, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e22b8bd1-f7a</externalid>
      <Title>Staff Product Manager, Serverless Workspaces</Title>
      <Description><![CDATA[<p>At Databricks, we are building the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. The Serverless Workspaces team is the engine behind Databricks&#39; shift from a &#39;configure-first&#39; to a &#39;use-now&#39; platform. We are redefining the customer onboarding experience by removing the heavy lifting of cloud infrastructure without complicated networking, storage, and cluster configuration, just instant access to data and AI.</p>
<p>You will own the strategy for this next-generation platform layer, balancing the simplicity of a SaaS experience with the control enterprise customers demand. The impact you will have:</p>
<ul>
<li>Drive the transition to Serverless: Lead the strategy to unify the onboarding journey across serverless and classic workspaces and drive 10X usage of serverless in the next year</li>
<li>Democratize Workspace Creation: Design and ship flows that allow users to spin up workspaces instantly with little friction while maintaining strict governance guardrails and company policies</li>
<li>Redefine the &#39;Getting Started&#39; experience: Lower the barrier to entry by removing the requirement for customers to manage detailed cloud infrastructure configurations before using Databricks, while allowing them to dial those in when they&#39;re ready</li>
<li>Solve &#39;Workspace Proliferation&#39;: Help define the tools and policies that allow Admins to confidently govern a growing number of workspaces across the enterprise</li>
<li>Unify the Data Estate: Work closely with the Unity Catalog and Identity teams to ensure that these new serverless environments seamlessly integrate with a customer&#39;s existing data and security models</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience as a Product Manager working on cloud infrastructure, developer platforms, or SaaS foundations</li>
<li>Technical depth in Cloud Infrastructure: Familiarity with AWS, Azure, or GCP resource management (e.g. networking, compute, identity) and how to abstract that complexity for end-users</li>
<li>Passion for simplification: A track record of taking complex technical workflows (like configuring a VPC or peering) and turning them into &#39;one-click&#39; consumer-grade experiences</li>
<li>Data-driven mindset: Comfortable defining and tracking KPIs, such as &#39;Time to First Workspace&#39; or &#39;Serverless Adoption Rate,&#39; to measure success</li>
</ul>
<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Cloud Infrastructure, Developer Platforms, SaaS Foundations, AWS, Azure, GCP, Networking, Compute, Identity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8420607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>193a44d6-056</externalid>
      <Title>Staff Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Full Stack Software Engineer to join our Global Public Sector team. As a key member of our team, you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack, AI applications, to solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>You will:</p>
<ul>
<li>Serve as the lead technical strategist for public sector engagements, converting ambiguous mission requirements into robust architectural roadmaps and guiding onsite implementation</li>
<li>Architect the fundamental frameworks for production-grade AI applications, setting the gold standard for how interactive UIs, backend systems, and AI models are integrated at scale to deliver reliable outcomes</li>
<li>Guide the evolution of cloud infrastructure, ensuring security, global scalability, and long-term system integrity across all environments</li>
<li>Direct the development of core platforms and shared services, ensuring they solve cross-cutting needs for diverse global client use cases</li>
<li>Partner with cross-functional leadership to steer the technical roadmap, mentoring senior and junior staff and ensuring all products align with a cohesive, future-proof technical architecture</li>
<li>Bridge the gap between the field and the core platform by turning real-world client lessons into the reusable patterns that power the entire engineering team</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Master&#39;s or PhD in Computer Science, or equivalent deep industry experience in architecting complex, distributed systems</li>
<li>10+ years of full-stack expertise across Python, Node.js, and React, with a proven track record of designing high-scale architectures on Kubernetes and global cloud infrastructures (AWS/Azure/GCP)</li>
<li>Expert ability to design and oversee production-grade ecosystems, ensuring world-class standards for system integrity, security, and long-term scalability</li>
<li>Extensive experience deploying and troubleshooting sophisticated end-to-end solutions directly within complex, high-security client environments</li>
<li>A self-driven leader capable of resolving extreme ambiguity, mentoring senior staff, and setting the technical vision for the organization</li>
<li>A driver of asynchronous workflows and documentation-first cultures to streamline global engineering velocity and reduce friction</li>
<li>Proficient in Arabic</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Past experience working at a startup as a CTO or founding engineer, or in a forward-deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Proven track record of building LLM-driven solutions with the strategic foresight to anticipate landscape shifts and architect future-proof systems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, React, Kubernetes, Cloud infrastructure, AI, Machine learning, Distributed systems, Cloud computing, Security, Arabic, LLM-driven solutions, Startup experience, CTO or founding engineer experience, Cross-functional experience with operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676610005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9af8d812-df8</externalid>
      <Title>AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI products.</p>
<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>
<ul>
<li>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</li>
<li>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</li>
<li>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</li>
<li>Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.</li>
<li>Play an active role in hiring, mentoring, and developing other engineers on the team.</li>
<li>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</li>
</ul>
<p>We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:</p>
<ul>
<li>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</li>
<li>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</li>
<li>You have hands-on experience with one or more of the following:
<ul>
<li>Model training (especially transformers and LLMs).</li>
<li>Model inference at scale (again, especially transformers and LLMs).</li>
<li>Low-level GPU work, such as writing CUDA or Triton kernels.</li>
</ul>
</li>
<li>You’re comfortable working in production environments at meaningful scale (traffic, data, or organizational).</li>
<li>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</li>
<li>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</li>
<li>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, or Go). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary, annual bonus, and equity</li>
<li>Regular compensation reviews: we reward great work!</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated.</li>
<li>Generous paid time off above statutory minimum</li>
<li>Hybrid working</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
<li>Fun events for employees, friends, and family!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI company that builds customer service solutions. It was founded in 2011 and serves nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7824142</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>230b25df-0f4</externalid>
      <Title>Senior Software Engineer- Database Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our Database Infrastructure team. As a member of this team, you will build and operate large-scale, reliable, and performant data systems using ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</p>
<p>You will collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord. You will exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</p>
<p>You will work with a talented team of engineers who have built one of the largest communication platforms in the world.</p>
<p>Requirements:</p>
<ul>
<li>4+ years of experience with building distributed systems and datastore infrastructure.</li>
<li>Experience with highly-available and distributed databases: e.g. ScyllaDB, Cassandra, BigTable, DynamoDB, CockroachDB, Postgres w/HA, etc.</li>
<li>Proficiency with at least one statically-typed programming language: e.g. Rust, Go, Java, C, C++</li>
<li>Strong operating systems, distributed systems, and concurrency control fundamentals.</li>
<li>Familiarity with Linux internals.</li>
<li>Comfortable working in fast-paced environments.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience with Cassandra or Scylla.</li>
<li>Experience with Rust.</li>
<li>Knowledge of DevOps tools like Salt, Terraform, or Kubernetes.</li>
</ul>
<p>Why Discord?</p>
<p>Discord plays a uniquely important role in the future of gaming. We&#39;re a multi-platform, multi-generational, and multiplayer platform that helps people deepen their friendships around games and shared interests.</p>
<p>We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank.</p>
<p>Join us in our mission!</p>
<p>Your future is just a click away!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>ScyllaDB, PostgreSQL, ElasticSearch, Linux, Rust, Distributed systems, Datastore infrastructure, Highly-available and distributed databases, Operating systems, Concurrency control fundamentals, Linux internals, Cassandra, Go, Java, C, C++, DevOps tools, Salt, Terraform, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including playing video games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8200328002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>588dfb0e-611</externalid>
      <Title>Solutions Architect - Kubernetes</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings, focusing on Kubernetes solutions within high-performance compute (HPC) environments.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Serving as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings</li>
<li>Collaborating closely with customers to understand their unique business needs and creating, prototyping, and deploying tailored solutions that align with their requirements</li>
<li>Leading proof-of-concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments</li>
<li>Driving technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise</li>
<li>Acting as a virtual member of CoreWeave&#39;s Kubernetes product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions</li>
<li>Offering valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture</li>
<li>Conducting periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions</li>
<li>Staying informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders</li>
<li>Leading the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption</li>
<li>Representing CoreWeave at conferences and industry events, with occasional travel as required</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A B.S. in Computer Science or a related technical discipline, or equivalent experience</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focused on building distributed systems or HPC/cloud services, with expertise in scalable Kubernetes solutions</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience designing and implementing cloud solutions</li>
<li>A proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL)</li>
<li>Experience running large-scale Artificial Intelligence/Machine Learning (AI/ML) training and inference workloads on technologies such as Slurm and Kubernetes</li>
</ul>
<p>Preferred qualifications include code contributions to open-source inference frameworks, experience with scripting and automation related to Kubernetes clusters and workloads, experience with building solutions across multi-cloud environments, and client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>Kubernetes, Cloud Computing, High-Performance Compute (HPC), Distributed Systems, Cloud Infrastructure, Scalable Solutions, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), Slurm, Kubernetes Clusters, Code Contributions to Open-Source Inference Frameworks, Scripting and Automation Related to Kubernetes Clusters and Workloads, Building Solutions Across Multi-Cloud Environments, Client or Customer-Facing Publications/Talks on Latency, Optimization, or Advanced Model-Server Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that offers a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4557835006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on, from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organizational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organization</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000 to £390,000</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cd413087-9c6</externalid>
      <Title>Data Center DCIM Program Leader - Infrastructure Operations</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Role</p>
<p>We are seeking a DCIM Program Leader to build, scale, and own our Data Center Infrastructure Management program. This leadership role is part of the Infrastructure Operations organization, which is responsible for building, scaling, and running one of the world&#39;s largest and most important cloud networks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own the successful deployment of CAPEX investments, ensuring our infrastructure scales ahead of demand.</li>
<li>Collaborate strategically with cross-functional partners including Project Managers, Capacity Planning, Finance, and Security to deliver on ambitious group initiatives.</li>
<li>Manage key third-party vendors and contractors, holding them accountable for performance and service level agreements (SLAs).</li>
<li>Drive a culture of continuous improvement by championing standardization, optimization, and automation.</li>
<li>Own incident response, root cause analysis (RCA), and executive-level communication during critical events.</li>
<li>Foster a best-in-class operations team by bringing fresh perspectives, leadership acumen, and a focus on employee engagement.</li>
</ul>
<p>Who you are:</p>
<p>You are an experienced engineering leader and DCIM expert, with a passion for building high-performing teams and a track record of driving operational excellence. You will set strategy, establish priorities, and mentor a group of top technical talent.</p>
<p>Qualifications:</p>
<ul>
<li>Education: Bachelor&#39;s degree in Information Technology, Computer Science, Business Administration, or a related field, or equivalent practical experience.</li>
<li>Experience: 7+ years in Infrastructure/Data Center Operations, including inventory control and asset lifecycle. 4+ years managing a major DCIM platform (Nlyte preferred).</li>
<li>Proven ability to lead complex, cross-functional programs and manage vendor relationships in a high-velocity environment.</li>
<li>Experience with leading change management in ERP and asset management transformation efforts.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to equip developers with the tools and systems they need to build a better web. This includes BGP surveillance, 1.1.1.1, and other projects that help the world become a better place.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>DCIM, Nlyte, ITIL, Change Management, Configuration Management, Data Center Infrastructure, Inventory Control, Asset Lifecycle, Vendor Management, Incident Response, Root Cause Analysis, Executive Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7535803</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0373d52-7fe</externalid>
      <Title>Senior IAM Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior IAM Engineer to join our team. As a Senior IAM Engineer, you will play a critical role in securing our systems and data. You will have the opportunity to work with cutting-edge IAM technologies, collaborate with cross-functional teams, and influence the development of our IAM strategy.</p>
<p>Your primary focus will be on designing and implementing identity lifecycle management, integration and orchestration, access governance, security and compliance, custom tooling, and data and AI infrastructure support. You will also be responsible for collaborating with cross-functional teams, improving provisioning and deprovisioning processes, integrating and managing IdPs within the IAM system, handling and streamlining access requests, developing and implementing IAM policies and procedures, and responding to ad-hoc requests.</p>
<p>To be successful in this role, you will need to have a strong understanding of identity lifecycle management, directory services, SSO, MFA, SCIM provisioning, and federation (SAML, OIDC, OAuth). You will also need to have experience partnering with HR, Finance, Compliance, and other cross-functional teams to design and implement IAM and enterprise solutions.</p>
<p>Additional skills and experience we&#39;d prioritize include experience with Workato or similar integration orchestrator tools, experience with Okta Workflows, certifications such as Workato or Okta Certified Professional/Administrator/Consultant, experience integrating IAM with HR systems, knowledge of compliance requirements related to IAM, and background in cloud platforms (AWS, GCP, Azure) and IAM integrations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scripting, Automation Mindset, APIs, Infrastructure as Code, Security Mindset, Identity and Access Management, Okta, Workday, Google Workspace, SCIM provisioning, Federation (SAML, OIDC, OAuth), Directory services, SSO, MFA, Workato, Okta Workflows, Certifications (Workato or Okta Certified Professional/Administrator/Consultant), Experience integrating IAM with HR systems, Knowledge of compliance requirements related to IAM, Background in cloud platforms (AWS, GCP, Azure) and IAM integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8393728002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>945f93f8-087</externalid>
      <Title>Engineering Manager - Vectorize</Title>
<Description><![CDATA[<p>We are on a mission to help build a better Internet. At Cloudflare, we&#39;re not looking for people who wait for a polished roadmap; we&#39;re looking for the builders who see the cracks in the Internet that everyone else has simply learned to live with.</p>
<p>Our culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.</p>
<p>The Cloudflare Vectorize team builds our managed, global vector database designed to power the next generation of AI-driven applications. Vectorize enables developers to store and query high-dimensional vector embeddings, providing the &quot;long-term memory&quot; required for Large Language Models (LLMs) and semantic search.</p>
<p>We are looking for an Engineering Manager to join the Vectorize team. You will lead a group of engineers who are defining how stateful AI applications are built at the edge. You will play a pivotal role in scaling Vectorize to support billions of vectors and hundreds of thousands of indexes while maintaining the performance and reliability Cloudflare is known for.</p>
<p>You bring a passion for making complex AI infrastructure accessible to every developer. You thrive in a fast-paced environment where you are building the foundations of the AI era. Most importantly, you have a track record of leading technical teams with a focus on high-quality execution and engineer career development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strong communication skills, Leading Distributed Systems, Navigating the AI Landscape, Execution &amp; Predictability, Developer-First Mindset, Technical Leadership, Systems Programming, Search &amp; Indexing Expertise, AI/ML Infrastructure, Database Internals, Serverless Ecosystem</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7627622</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues, coordinate with partners on formatting and transfer protocols, and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>A Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4cd384f-6ed</externalid>
      <Title>Senior Software Engineer, Release Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>
<p>In this role, you will participate in the full software lifecycle, from design and implementation to deployment and long-term operation, and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing, building, and maintaining tools and systems that support release automation and deployment workflows</li>
<li>Writing clean, reliable, and concurrent code that supports distributed systems</li>
<li>Collaborating with cross-functional teams to understand and improve release quality and developer productivity</li>
<li>Documenting technical designs, deployment practices, and operational procedures</li>
<li>Participating in small-team design reviews and contributing practical engineering solutions</li>
</ul>
<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar)</li>
<li>A solid understanding of concurrency, distributed systems, and multi-threaded programming</li>
<li>Experience contributing to backend systems, tooling, infrastructure, or developer workflows</li>
<li>A track record of solving moderately complex problems with reliable, maintainable solutions</li>
<li>The ability to collaborate effectively in a remote, fast-paced environment</li>
</ul>
<p>Ideally, you will also have familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration; experience with cloud environments (AWS, GCP) and container tooling; and exposure to distributed systems orchestration, observability tooling, or platform engineering.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000 - $237,600</Salaryrange>
      <Skills>Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and makes applications more reliable.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where everything you work on touches all of dbt Cloud and every one of our customers.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a closely-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully-remote, distributed team.</li>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
<li>Take strong ownership of your work from end to end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p>Equity Stake Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
<li>Equity or comparable benefits may be offered depending on the legal limitations</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c38cbb6f-4b7</externalid>
      <Title>Staff Software Engineer, Inference</Title>
<Description><![CDATA[<p>Job Title: Staff Software Engineer, Inference</p>
<p>Location: Dublin, IE</p>
<p>Department: Software Engineering - Infrastructure</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role:</p>
<p>Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry&#39;s largest compute-agnostic inference deployments. We are responsible for the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.</p>
<p>The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.</p>
<p>As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.</p>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>High-performance, large-scale distributed systems</li>
<li>Implementing and deploying machine learning systems at scale</li>
<li>Load balancing, request routing, or traffic management systems</li>
<li>LLM inference optimization, batching, and caching strategies</li>
<li>Kubernetes and cloud infrastructure (AWS, GCP)</li>
<li>Python or Rust</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have significant software engineering experience, particularly with distributed systems</li>
<li>Are results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Want to learn more about machine learning systems and infrastructure</li>
<li>Thrive in environments where technical excellence directly drives both business results and research breakthroughs</li>
<li>Care about the societal impacts of your work</li>
</ul>
<p>Representative projects across the org:</p>
<ul>
<li>Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators</li>
<li>Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads</li>
<li>Building production-grade deployment pipelines for releasing new models to millions of users</li>
<li>Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage</li>
<li>Contributing to new inference features (e.g., structured sampling, prompt caching)</li>
<li>Supporting inference for new model architectures</li>
<li>Analyzing observability data to tune performance based on real-world production workloads</li>
<li>Managing multi-region deployments and geographic routing for global customers</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: €295.000-€355.000 EUR</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€295.000-€355.000 EUR</Salaryrange>
      <Skills>performance optimization, distributed systems, large-scale service orchestration, intelligent request routing, LLM inference optimization, batching strategies, multi-accelerator deployments, Kubernetes, cloud infrastructure, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5150472008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a2ea62c-943</externalid>
      <Title>Research Engineer, Infrastructure, RL Systems</Title>
      <Description><![CDATA[<p>We&#39;re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models through reinforcement learning.</p>
<p>This role sits at the intersection of research and large-scale systems engineering: a builder who understands both the algorithms behind RL and the realities of distributed training and inference at scale. You&#39;ll wear many hats, from optimising rollout and reward pipelines to enhancing reliability, observability, and orchestration, collaborating closely with researchers and infra teams to make reinforcement learning stable, fast, and production-ready.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and optimise the infrastructure that powers large-scale reinforcement learning and post-training workloads.</li>
<li>Improve the reliability and scalability of RL training pipelines, distributed RL workloads, and training throughput.</li>
<li>Develop shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility for RL systems.</li>
<li>Collaborate with researchers to translate algorithmic ideas into production-grade training pipelines.</li>
<li>Build evaluation and benchmarking infrastructure that measures model progress on helpfulness, safety, and factuality.</li>
<li>Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.</li>
</ul>
<p>We&#39;re looking for someone with strong engineering skills and the ability to contribute performant, maintainable code and debug complex codebases. You should have a good understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</p>
<p>Experience training or supporting large-scale language models with tens of billions of parameters or more is a plus. Familiarity with monitoring and observability tools (Prometheus, Grafana, OpenTelemetry) is also a plus.</p>
<p>Logistics:</p>
<ul>
<li>Location: This role is based in San Francisco, California.</li>
<li>Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.</li>
<li>Visa sponsorship: We sponsor visas. While we can&#39;t guarantee success for every candidate or role, if you&#39;re the right fit, we&#39;re committed to working through the visa process together.</li>
<li>Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>deep learning frameworks, PyTorch, JAX, complex codebases, scalable AI infrastructure, large-scale language models, reinforcement learning systems, monitoring and observability tools, Prometheus, Grafana, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachineslab.com.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachineslab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013930008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07b35bd1-4bf</externalid>
      <Title>Forward Deployed AI Engineering Manager, GenAI Applications</Title>
      <Description><![CDATA[<p>At Scale AI, we are not just building AI tools. We are pioneering the next era of enterprise AI.</p>
<p>As businesses rush to harness the potential of Generative AI, Scale is leading the way, transforming workflows, automating complex processes, and driving real-world impact for the world’s largest enterprises and government organizations.</p>
<p>Our Scale Generative AI Platform (SGP) powers production-grade GenAI applications with foundational services, APIs, and infrastructure that accelerate adoption across industries.</p>
<p>We are looking for a technical and strategic Engineering Manager to lead our European FDE team.</p>
<p>This is a high-ownership role at a pivotal moment. You will be responsible for delivering high-impact GenAI solutions in production, leading a team that works directly with customers, and ensuring we solve real problems with clarity, speed, and excellence.</p>
<p>Why this role is unique:</p>
<ul>
<li>Right place, right time: We are moving from prototypes to production at scale. Our FDE team is on the front lines of this transition, helping customers adopt AI faster and with more confidence.</li>
<li>Customer-first mindset: You will foster a culture of deep customer empathy and practical problem-solving. From scoping use cases to shipping solutions, your team will be responsible for every step of the delivery lifecycle.</li>
<li>Strategic influence: The lessons from forward-deployed efforts directly inform our core product roadmap. You will work closely with Product and Platform teams to identify patterns, prioritize improvements, and shape the evolution of SGP.</li>
<li>Operational excellence: You will bring structure to delivery, improve execution, and scale our engineering operations in a fast-moving environment.</li>
</ul>
<p>This is a rare opportunity to help define how the next generation of AI applications is built and deployed.</p>
<p>If you are excited by the pace of innovation in GenAI, passionate about solving real-world problems, and ready to lead a team that is redefining enterprise AI delivery, we want to hear from you.</p>
<p>At Scale, we do not just follow AI breakthroughs. We deliver them. Join us and be part of the team shaping the future of AI in the enterprise.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering management, Generative AI, cloud infrastructure, DevOps, scalable platform architecture, strategic thinking, operational rigor, communication and collaboration skills, hands-on experience building or deploying AI-powered systems, understanding of how model behavior shapes user experience, leadership presence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4589592005</Applyto>
      <Location>Berlin, Germany; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>df98e6ee-27b</externalid>
      <Title>Director, VC-Backed Startup Sales (West)</Title>
      <Description><![CDATA[<p>We are seeking a Sales Leader to join our growing Startups segment. As a Director, VC-Backed Startup Sales (West), you will oversee and motivate a team of Hunting Account Executives focused on winning new logos within the VC-Backed Startup ecosystem.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Driving revenue success by owning and exceeding monthly, quarterly, and annual sales targets</li>
<li>Developing and implementing strategic plans for team development and revenue attainment</li>
<li>Building trust-based relationships with employees, customers, partners, and cross-functional teams</li>
<li>Leading with value by enabling your team to understand the commercial and business goals of your customers and how they relate to our value proposition</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>3+ years of experience as a high-growth Data/AI/Infrastructure leader with experience leading a sales team to new heights</li>
<li>Experience translating a highly technical product to business value for the C-suite</li>
<li>A track record of success executing against personal and team goals while always striving to uplevel your and your team&#39;s skills</li>
<li>Proven leadership ability to influence, develop, and empower employees to achieve personal objectives with a team-first mindset</li>
<li>Excellent written and verbal communication skills as well as experience communicating with and presenting to the C-Suite</li>
</ul>
<p>Benefits include comprehensive benefits and perks that meet the needs of all employees.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data/AI/Infrastructure leadership, Sales team management, Strategic planning, Relationship building, Communication skills, High-growth sales experience, Technical product knowledge, Leadership development, Team management, Business acumen</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the data lakehouse.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8506402002</Applyto>
      <Location>Chicago, Illinois; Denver, Colorado; Los Angeles, California; San Francisco, California; Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95e699c2-f66</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a passionate, results-oriented sales professional to drive revenue growth calling on Enterprise accounts. As an Enterprise Account Executive, you will be responsible for securing new business and expanding existing relationships with our clients. You will plan and execute strategies and sales tactics in the following areas: generating new business, territory planning, pre-request for proposal prospecting, relationship development, pricing, presentation and delivery, negotiations, closing and executing contracts.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Establish a vision and plan to guide your long-term approach to net new logo pipeline generation.</li>
<li>Consistently deliver ARR revenue targets to support 40% YOY growth – dedication to the number and to deadlines.</li>
<li>Develop and execute sales strategies and tactics to generate pipeline, drive sales opportunities and deliver repeatable and predictable bookings.</li>
<li>Land, adopt, expand, and deepen sales opportunities with Enterprise accounts in your Region.</li>
<li>Explore the full spectrum of relationships and business possibilities across the client’s entire org chart.</li>
<li>Become known as a thought-leader in Okta’s platform.</li>
<li>Expand relationships and orchestrate complex deals across more diverse business stakeholders.</li>
<li>Holistically embrace, access, and utilize the channel/alliances to identify and open new, uncharted opportunities.</li>
<li>Work as a team for the most efficient use and deployment of resources. Provide timely and insightful input back to other corporate functions.</li>
<li>Position Okta at both the functional and “business value” level with target stakeholders.</li>
<li>Champion Okta to prospective clients at sales presentations, site visits and product demonstrations</li>
<li>Build effective working partnerships with your Okta colleagues (channel partners, sales engineering, business value management, customer first and many more globally) with humility and enthusiasm.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of direct field sales experience with a consistent track record of developing net new logos and selling enterprise cloud software to enterprise companies.</li>
<li>Previous experience utilizing partners, channels, and alliances to sell more successfully and overachieve your quota.</li>
<li>Sold a similar complex solution software and have experience in any of the following: enterprise cloud software or infrastructure management, application development and management, security, business applications, and/or analytics.</li>
<li>Measurable track record in new business development and over achieving sales targets.</li>
<li>Experience selling complex enterprise software solutions, with the ability to adapt quickly in high-growth, fast-changing environments.</li>
<li>Experience in successfully selling during market creation phase.</li>
<li>Proven track record of successfully closing six figure software cloud deals with prospects and customers in the defined territory.</li>
<li>Experience in the “C” suite, strong executive presence and polish, and excellent listening skills.</li>
<li>Experience with target account selling, solution selling, and/or consultative sales techniques; knowledge of MEDDIC and Challenger methodologies is a plus.</li>
<li>Bachelor&#39;s degree; MBA a plus or equivalent experience.</li>
</ul>
<p>The OTE range for this position for candidates located in the San Francisco Bay area is between $260,000-$390,000 USD.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$260,000-$390,000 USD</Salaryrange>
      <Skills>Cloud-based identity and access management, Enterprise cloud software, Infrastructure management, Application development and management, Security, Business applications, Analytics, Sales strategies, Pipeline generation, Sales opportunities, Repeatable and predictable bookings, Net new logo pipeline generation, ARR revenue targets, Complex enterprise software solutions, High growth environments, Market creation phase, Six figure software cloud deals, Target account selling, Solution selling, Consultative sales techniques, MEDDIC and Challenger methodologies</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorization solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7629344</Applyto>
      <Location>New Jersey; New York, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b106bca-f53</externalid>
      <Title>Senior Product Engineer</Title>
      <Description><![CDATA[<p>At Intercom, you will be a product engineer - someone who solves real customer problems through a smart and efficient application of your technical knowledge and your tools.</p>
<p>You’ll be part of one of our multidisciplinary product teams, where you will build both back-end and front-end systems, and work closely with designers, product managers, researchers, and data analysts.</p>
<p>We’re facing many exciting scaling challenges and we’re building a robust platform where your expertise can be applied to areas such as building a beautiful messenger composer, rule matching, deliverability, security, app availability and machine learning, to name a few.</p>
<p>As an experienced engineer you will:</p>
<ul>
<li>Develop technical plans and contribute to our technical architecture as we scale our products to serve tens of millions of people every day.</li>
<li>Write Ruby code that knits together the many AWS, infrastructure, platform, and SaaS technologies that form the core of Intercom’s backend infrastructure.</li>
<li>Ship a change to production on your first day and a feature in your first week. That “day one” change is automatically deployed to production along with 100 other deployments (on average) each weekday.</li>
<li>Build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</li>
<li>Grow your team’s capacity by mentoring other engineers and interviewing candidates.</li>
</ul>
<p>This is a chance to be an integral part of building and growing a team.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, AWS, infrastructure, platform, SaaS technologies, Distributed systems, AI-powered developer tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is the AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7371932</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d9b7d5ae-6bf</externalid>
      <Title>Software Engineer, Distributed Systems</Title>
      <Description><![CDATA[<p>We&#39;re growing our team of passionate creatives and builders on a mission to make design accessible to all. Our platform helps teams bring ideas to life, whether you&#39;re brainstorming, creating a prototype, translating designs into code, or iterating with AI. From idea to product, Figma empowers teams to streamline workflows, move faster, and work together in real time from anywhere in the world.</p>
<p>As a Software Engineer on our Infrastructure team, you’ll help design, build, and operate the systems that power our real-time collaborative design tools used by millions of people worldwide. We’re scaling fast, and we’re looking for experienced distributed systems engineers across a variety of teams. Whether you’re passionate about storage, compute orchestration, developer tooling, networking, or real-time data systems, this role offers an opportunity to shape the technical foundation of one of the most beloved design platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain scalable and reliable infrastructure systems that support product innovation and user collaboration at scale.</li>
<li>Architect and evolve distributed systems including storage platforms, streaming infrastructure, and compute orchestration.</li>
<li>Improve developer experience by building internal platforms, CI/CD systems, build tools, and APIs.</li>
<li>Collaborate across product and infrastructure teams to design secure, maintainable, and performant systems.</li>
<li>Participate in shaping platform strategy, roadmaps, and engineering best practices across the organization.</li>
<li>Debug and resolve complex production issues that span services and layers of the stack.</li>
<li>Mentor engineers and foster a culture of collaboration, inclusivity, and technical excellence.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of Software Engineering experience, specifically in backend or infrastructure engineering.</li>
<li>Deep understanding of distributed systems concepts such as sharding, replication, consistency, and eventual convergence.</li>
<li>Experience with cloud-native environments (AWS, GCP, or Azure), infrastructure-as-code, and container orchestration.</li>
<li>Proficiency in languages such as Go, TypeScript, Python, Rust, or Ruby.</li>
<li>Strong system design skills and a track record of architecting resilient production systems.</li>
<li>Excellent communication skills, with experience collaborating across teams and mentoring others.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience scaling storage platforms (e.g., Postgres, Redis, S3, DynamoDB) or operating streaming systems like Kafka.</li>
<li>Background in traffic management, DDoS mitigation, or service mesh technologies (e.g., Envoy, Istio).</li>
<li>A history of developing complex, real-time distributed systems at scale.</li>
<li>A passion for building developer productivity tools, including development environments, CI/CD pipelines, and build systems.</li>
<li>Experience with evolving large-scale, shared developer platforms to improve reliability and developer velocity.</li>
<li>Strong problem-solving skills and a bias for action, especially when tackling high-impact, gritty challenges.</li>
</ul>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$153,000-$376,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native environments, infrastructure-as-code, container orchestration, Go, TypeScript, Python, Rust, Ruby, system design, resilient production systems, storage platforms, streaming infrastructure, compute orchestration, developer tooling, networking, real-time data systems, traffic management, DDoS mitigation, service mesh technologies, complex distributed systems, developer productivity tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design platform that helps teams bring ideas to life through real-time collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5552549004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f706224-663</externalid>
      <Title>Specialist Solutions Architect - Cloud Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and security of their Databricks deployments.</p>
<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on production experience with public cloud - AWS, Azure, and GCP.</p>
<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>
<p>As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, ranging from design to deployment</li>
<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>
<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>
<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content and custom architectures</li>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
<li>Contribute to the Databricks Community</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in a technical role with expertise in at least one of the following:
<ul>
<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP; Serverless Architecture</li>
<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance</li>
<li>Networking: Architecture design, implementation, and performance</li>
<li>Identity management: Provisioning, SCIM, OAuth, SAML, Federation</li>
<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>
<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>
</ul>
</li>
<li>Maintain and extend the Databricks environment to adapt to evolving complex needs.</li>
<li>Deep specialty expertise in at least one of the following areas:
<ul>
<li>Security: understanding how to secure data platforms and manage identities</li>
<li>Complex deployments</li>
<li>Public cloud: experience designing data platforms on cloud infrastructure and services such as AWS, Azure, or GCP, using best practices in cloud security and networking</li>
</ul>
</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience.</li>
<li>Hands-on experience with Python, Java, or Scala; proficiency in SQL and experience with Terraform are desirable.</li>
<li>2 years of professional experience with Big Data technologies (e.g., Spark, Hadoop, Kafka) and architectures</li>
<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>
<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>Zone 2 Pay Range $264,000-$363,000 USD</p>
<p>Zone 3 Pay Range $264,000-$363,000 USD</p>
<p>Zone 4 Pay Range $264,000-$363,000 USD</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$264,000-$363,000 USD</Salaryrange>
      <Skills>Cloud Platforms &amp; Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL, Terraform experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477197002</Applyto>
      <Location>Central - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficient in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Workplace Type: Remote eligible.</p>
<p>Experience Level: Senior.</p>
<p>Employment Type: Full-time.</p>
<p>Category: Engineering.</p>
<p>Industry: Technology.</p>
<p>Required Skills: AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, and correlation.</p>
<p>Preferred Skills: Cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2155c4d8-383</externalid>
      <Title>Customer Marketing, Strategic Programs</Title>
      <Description><![CDATA[<p>We are seeking a Customer Marketing, Strategic Program Manager to build and run programs that put customers at the centre of how we launch products and tell our story.</p>
<p>As a key member of the Customer Marketing team, you will combine strategic thinking with hands-on program management. You will need to deeply understand what our customers are building with Claude, translate that into compelling proof points that support launches, and orchestrate the moving pieces so those stories land on time.</p>
<p>Responsibilities:</p>
<ul>
<li>Support Anthropic launches end-to-end: customer selection, onboarding, feedback collection, and activation for launches</li>
<li>Manage our early access program end-to-end from a customer marketing perspective: coordinating co-marketing moments, collecting customer proof points during the program, and ensuring participants are set up to share via their own channels when the launch goes live</li>
<li>Work with Sales and Customer Marketing segment leads to build and maintain customer tiering and tracking in CRM, setting up automated triggers so we know when a customer hits a milestone that unlocks new engagement opportunities</li>
<li>Manage customer events programming end-to-end: selecting the right customers and speakers, building out session content with Customer Marketing segment leads, prepping speakers, and acting as the customer point of contact day-of</li>
<li>Support analyst relations efforts by identifying and preparing customers for analyst briefings and inquiry calls</li>
<li>Build and maintain voice-of-the-customer programs that channel structured feedback to Product teams, surfacing what customers need, what&#39;s working, and what isn&#39;t</li>
<li>Coordinate across Sales, Product, PMM, Events, Creative, and Communications to align customer storytelling with launch timelines and go-to-market plans</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>8+ years of experience in customer marketing, product marketing, or a role that blended both, ideally in B2B SaaS or enterprise technology</li>
<li>Experience running early access, beta, or customer preview programs tied to product launches</li>
<li>Strong cross-functional coordination skills; you&#39;ve worked across Product, Sales, Marketing, and Comms and know how to keep everyone aligned without formal authority</li>
<li>Excellent storytelling and writing skills, with the ability to translate technical customer outcomes into clear, compelling narratives</li>
<li>Experience building and managing customer reference programs or pipelines</li>
<li>A track record of working closely with product teams, whether through structured feedback programs, advisory boards, or embedded partnerships</li>
<li>Comfort working with analyst relations; you&#39;ve prepped customers for briefings or participated in AR programs before</li>
<li>Strong project management instincts; you can juggle multiple launches, events, and programs simultaneously without dropping things</li>
<li>A genuine interest in understanding what customers are building and why, not just capturing a quote and moving on</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience with customer advisory boards (CABs) or structured community programs</li>
<li>Background in the AI/ML ecosystem, particularly developer tools and infrastructure</li>
<li>Familiarity with analyst relations firms and processes (Gartner, Forrester, IDC, etc.)</li>
<li>Experience coordinating co-marketing efforts with cloud service providers or technology partners</li>
<li>A product sense; you naturally think about how customer feedback should influence roadmap and positioning</li>
</ul>
<p>The annual compensation range for this role is $255,000-$320,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$320,000 USD</Salaryrange>
      <Skills>customer marketing, product marketing, cross-functional coordination, storytelling, writing, project management, analyst relations, customer advisory boards, structured community programs, AI/ML ecosystem, developer tools, infrastructure, co-marketing efforts, cloud service providers, technology partners</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5162846008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce19e8c0-163</externalid>
      <Title>Transaction Manager</Title>
      <Description><![CDATA[<p>As a Transaction Manager at Anthropic, you&#39;ll drive the commercial sourcing and transaction execution process for our data center capacity deals. You&#39;ll lead RFP processes, negotiate term sheets, and serve as the central leader ensuring seamless stakeholder alignment from initial sourcing through lease execution.</p>
<p>This role is critical to securing the infrastructure that powers Anthropic&#39;s frontier AI systems, requiring you to bridge commercial negotiations with complex internal coordination across legal, finance, engineering, and network teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Help identify data center capacity opportunities and options by managing a network of relationships across data center developers, brokers, and power contacts.</li>
<li>Lead the RFP and commercial sourcing process for specific data center deals, managing developer outreach, proposal evaluation, and competitive selection processes</li>
<li>Negotiate term sheets and manage the LOI process, structuring commercial terms that meet Anthropic&#39;s technical and business requirements while maintaining strong developer partnerships</li>
<li>Create the bridge from LOI to executed transaction, ensuring all commercial, technical, and legal requirements are satisfied for deal closure</li>
<li>Serve as project manager for cross-functional stakeholder engagement, coordinating due diligence teams, internal and external legal counsel, network organization, platform engineers, and finance organization to ensure alignment prior to lease execution</li>
<li>Act as the single point of contact (SPOC) for auxiliary organizations including networks, deployments, and government relations, providing regular updates on transaction progress and leasing process status</li>
<li>Develop and maintain transaction timelines, tracking critical path items and proactively identifying risks that could impact deal closure</li>
<li>Document and refine transaction processes and playbooks to enable scalable deal execution as Anthropic expands its infrastructure footprint</li>
<li>Ensure all stakeholder requirements are captured and addressed in commercial agreements, translating technical and operational needs into contractual terms</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in transaction management, commercial real estate, data center leasing, or infrastructure procurement</li>
<li>Possess a proven track record of managing complex, multi-stakeholder transactions from sourcing through execution</li>
<li>Have strong negotiation skills with experience structuring term sheets, LOIs, and commercial agreements</li>
<li>Excel at project management and can coordinate across legal, technical, finance, and operational teams simultaneously</li>
<li>Have experience with RFP processes and competitive sourcing for large-scale infrastructure or real estate transactions</li>
<li>Demonstrate exceptional communication skills, able to serve as an effective liaison between internal stakeholders and external partners</li>
<li>Are highly organized with strong attention to detail while maintaining focus on strategic deal objectives</li>
<li>Can operate effectively in fast-paced, ambiguous environments where processes are being built alongside execution</li>
<li>Have a collaborative mindset and can build trust with diverse stakeholder groups across the organization</li>
</ul>
<p>It&#39;s a bonus if you:</p>
<ul>
<li>Have experience with data center or hyperscale infrastructure transactions specifically</li>
<li>Understand technical requirements for AI/ML workloads including power density, cooling, and network connectivity</li>
<li>Have worked with legal teams on complex lease negotiations or infrastructure agreements</li>
<li>Possess familiarity with data center developer ecosystems and market dynamics</li>
<li>Have experience in high-growth technology companies managing infrastructure expansion</li>
<li>Understand utility coordination, power procurement, or energy considerations in data center transactions</li>
<li>Have a background in corporate development, strategic partnerships, or infrastructure investment</li>
</ul>
<p>The annual compensation range for this role is $365,000-$435,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$435,000 USD</Salaryrange>
      <Skills>transaction management, commercial real estate, data center leasing, infrastructure procurement, RFP processes, competitive sourcing, project management, negotiation skills, term sheets, LOIs, commercial agreements, data center or hyperscale infrastructure transactions, AI/ML workloads, power density, cooling, network connectivity, utility coordination, power procurement, energy considerations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5099080008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f418117c-d57</externalid>
      <Title>Director, Engineering - Patient</Title>
      <Description><![CDATA[<p>We are looking for a Director, Engineering - Patient to lead our Patient Core Experience team. As a key member of our engineering leadership team, you will be responsible for setting the technical strategy for our core patient experience, leading three engineering managers, and scaling the org from 18 to 30+ engineers over the next 18 months. You will also co-own patient funnel metrics with your Product and Data counterparts, drive delivery of ML-powered ranking, reimagined patient onboarding, and patient activation systems, and build an engineering culture with clear standards for velocity, quality, and technical excellence.</p>
<p>Required experience includes 10+ years of software engineering experience, 5+ years managing engineering managers, and leading engineering for a consumer or marketplace product where search, matching, ranking, or personalization was core to the business. You should also have scaled an engineering org through a high-growth phase (25+ to 50+) while maintaining velocity and quality, and be technically strong enough to make sound architecture calls on ranking/ML systems, marketplace infrastructure, and consumer-facing surfaces.</p>
<p>Nice-to-have experience includes healthcare experience or other regulated industries where data sensitivity and clinical consequences raise the stakes, experience with marketplace dynamics (supply/demand balancing, multi-sided incentive design), experience building LLM-based product features (conversational interfaces, intelligent triage, AI-assisted workflows), and experience rethinking team structure or hiring profiles in response to AI productivity gains.</p>
<p>Our stack includes Python (Django/FastAPI), TypeScript/React, Elasticsearch, PostgreSQL, Redis, dbt, Snowflake, Temporal, and custom ML models. Everything runs on AWS.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$264,000 to $324,000</Salaryrange>
      <Skills>software engineering, engineering management, consumer or marketplace product, search, matching, ranking, or personalization, high-growth phase, velocity and quality, ranking/ML systems, marketplace infrastructure, consumer-facing surfaces, healthcare experience, regulated industries, marketplace dynamics, LLM-based product features, team structure, hiring profiles, AI productivity gains</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Headway</Employername>
      <Employerlogo>https://logos.yubhub.co/headway.com.png</Employerlogo>
      <Employerdescription>Headway builds technology that simplifies mental healthcare, taking the hardest parts and making them simple. It is one of the fastest-growing companies in healthcare.</Employerdescription>
      <Employerwebsite>https://www.headway.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/headway/jobs/5972731004</Applyto>
      <Location>New York, New York, United States; San Francisco, California, United States; Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4e35d55-5d1</externalid>
      <Title>Technical Program Manager, Safeguards (Infrastructure &amp; Evals)</Title>
      <Description><![CDATA[<p>Job Title: Technical Program Manager, Safeguards (Infrastructure &amp; Evals)</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Safeguards Engineering builds and operates the infrastructure that keeps Anthropic&#39;s AI systems safe in production: the classifiers, detection pipelines, evaluation platforms, and monitoring systems that sit between our models and the real world. That infrastructure needs to be not just correct, but reliable: when a safety-critical pipeline goes down or degrades, the consequences can be serious, and they can be invisible until someone looks closely.</p>
<p>As a Technical Program Manager for Safeguards Infrastructure and Evals, you&#39;ll own the operational health and forward momentum of this stack. Your primary responsibility is driving reliability: owning the incident-response and post-mortem process, ensuring SLOs are defined and met in partnership with various teams, and making sure that when things go wrong, the right people know, the right actions get taken, and those actions actually get closed out.</p>
<p>Alongside that ongoing operational rhythm, you&#39;ll coordinate the larger platform investments: migrations, eval-platform improvements, and the cross-team dependencies that connect them. This role sits at the intersection of operations and program management. It requires genuine technical depth: you need to understand how these systems work well enough to triage effectively, judge what&#39;s actually safety-critical versus what can wait, and have informed conversations with the engineers building and maintaining them. But the core of the job is keeping the machine running well and the work moving.</p>
<p>What You&#39;ll Do:</p>
<ul>
<li>Own the Safeguards Engineering ops review</li>
<li>Drive the recurring cadence that keeps the team informed and coordinated: surfacing recent incidents and failures, bringing visibility to reliability trends, and making sure the right people are in the room when decisions need to be made.</li>
<li>Drive incident tracking and post-mortem execution</li>
<li>Establish and maintain SLOs with partner teams</li>
<li>Maintain runbook quality and incident-ownership clarity</li>
<li>Drive platform migrations and infrastructure projects</li>
<li>Coordinate evals platform improvements</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have solid technical program management experience, particularly in operational or infrastructure-heavy environments; you&#39;re comfortable owning a mix of ongoing operational cadences and discrete project work simultaneously.</li>
<li>Understand how production ML systems work well enough to triage incidents intelligently and have substantive conversations with engineers about what&#39;s going wrong and why; you don&#39;t need to write the code, but you need to follow the technical thread.</li>
<li>Are energized by closing loops. Post-mortem action items that never get done, SLOs that no one checks, runbooks that go stale: these things bother you, and you know how to build the processes and follow-ups that fix them.</li>
<li>Can work effectively across team boundaries: comfortable coordinating with partner teams (like Inference) where you don&#39;t have direct authority, and skilled at keeping shared work moving through influence and clear communication.</li>
<li>Thrive in environments where the work shifts between &#39;keep the lights on&#39; and &#39;build something new&#39;, and can context-switch between incident follow-ups and longer-horizon platform projects without dropping either.</li>
<li>Have experience with or strong interest in AI safety; you understand why the reliability of a safety-critical pipeline is a different kind of problem than the reliability of a product feature, and that distinction motivates you.</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with SRE practices, incident management frameworks, or on-call operations at scale.</li>
<li>Have worked on or with evaluation infrastructure for ML systems, understanding how evals get designed, run, and interpreted.</li>
<li>Have experience driving infrastructure migrations in complex, multi-team environments, particularly where the migration touches operational systems that can&#39;t go offline.</li>
<li>Be familiar with monitoring and alerting tooling (PagerDuty, Datadog, or equivalents) and the operational culture around them.</li>
</ul>
<p>Deadline to apply: None; applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $290,000-$365,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Operational or Infrastructure-heavy environments, Production ML systems, Incident management frameworks, On-call operations, Evaluation infrastructure for ML systems, Infrastructure migrations, Monitoring and alerting tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108695008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ed46937-df6</externalid>
      <Title>Staff Developer Success Engineer - West</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>
<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>
<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>
<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>
<p>As a Staff Developer Success Engineer you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>
<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>
<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, Typescript, Javascript, or C++.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8447826b-717</externalid>
      <Title>Senior Systems Integration Engineer</Title>
      <Description><![CDATA[<p>EarnIn is scaling its systems, automations, and data capabilities to power its people and protect its information. As a Senior Systems Integration Engineer, you will be a hands-on technical lead focused on Python-driven automation, building systems integrations between HRIS, Identity Provider, SaaS, and Finance Platform, and transforming operational data into actionable insights and dashboards.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, build, and maintain production-grade automations and internal tools in Python to eliminate manual work across identity, endpoint, and SaaS operations.</li>
<li>Develop resilient API integrations and event-driven workflows (webhooks, queues) with robust error handling, retries, and observability; package reusable libraries and CLIs that standardize how IT automates.</li>
<li>Codify repeatable infrastructure with Terraform; manage changes via Git and CI/CD (e.g., GitHub Actions).</li>
<li>Build and operate integrations between HRIS/IdP/SaaS and financial platforms (e.g., NetSuite, Carta, Expensify), ensuring data quality, lineage, and reconciliation across systems.</li>
<li>Create and maintain lightweight services that normalize and enrich data flows to power business intelligence and compliance reporting (Tableau/Power BI/Looker Studio).</li>
<li>Define KPIs/SLIs/SLOs for core IT services (availability, compliance, MTTR, deflection, time-to-productive-employee) and implement monitoring/alerting.</li>
<li>Build data warehouses (e.g., Databricks), write SQL against them (e.g., BigQuery), and build self-serve dashboards for IT, Security, Finance, People Ops, and Engineering; instrument pipelines for accuracy and freshness.</li>
<li>Deliver repeatable, audit-ready evidence for controls via dashboards and scheduled reports.</li>
<li>Evaluate and deploy AI tools with guardrails to boost IT productivity; automate helpdesk workflows (triage, summarization, routing, knowledge search).</li>
<li>Define and track value metrics (adoption, deflection, CSAT, MTTR, time saved); iterate based on experiments and user feedback.</li>
<li>Implement and sustain controls mapped to SOC 2 and PCI (as applicable) with repeatable evidence collection.</li>
<li>Define and review SLIs/SLOs; add monitoring/alerting, config drift detection, and incident runbooks.</li>
<li>Lead cross-functional projects with Security, People Ops, Finance, and Engineering, from design through steady state.</li>
<li>Mentor junior engineers through design and code reviews; publish clear documentation that makes the reliable path the easy path.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, API/OpenAPI, event-driven workflows, SQL, Infrastructure as Code (Terraform), Git-based change management, security mindset</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, providing financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7703637</Applyto>
      <Location>Remote, Mexico</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7de81b4-bec</externalid>
      <Title>Security Engineer, Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Infrastructure Security Engineer to join our team. This role is integral to ensuring the security and integrity of our platform.</p>
<p>You will be responsible for securing large cloud environments, orchestrating and securing various compute clusters, and reviewing infrastructure as code. Your expertise in cloud security, infrastructure automation, and advanced security practices will be essential in maintaining and enhancing our security posture.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Securing infrastructure across large cloud hosting providers (e.g., AWS, Azure, GCP).</li>
<li>Implementing and maintaining robust security configurations and policies for cloud environments.</li>
<li>Conducting regular security assessments and audits of infrastructure to identify vulnerabilities and areas for improvement.</li>
<li>Developing and enforcing security best practices for infrastructure automation and orchestration.</li>
<li>Collaborating with Developer Experience, IT, and product teams to integrate security into all stages of the infrastructure lifecycle.</li>
<li>Reviewing and securing infrastructure as code (e.g., Terraform, CloudFormation).</li>
<li>Educating and mentoring team members on infrastructure security best practices and emerging threats.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Proven experience as a Security Engineer with a focus on product security.</li>
<li>Proficiency in NodeJS, TypeScript, and Kubernetes.</li>
<li>Experience with orchestrating and securing GPU clusters.</li>
<li>Proficiency in infrastructure as code tools such as Terraform and CloudFormation.</li>
<li>Excellent communication skills, with the ability to clearly explain technical concepts and their implications to both technical and non-technical stakeholders.</li>
<li>Demonstrated ability to influence security strategies and drive improvements within an organisation.</li>
<li>Relevant security certifications (e.g., AWS Certified Security Specialty, Certified Cloud Security Professional) are a plus.</li>
<li>Experience in a senior or lead security role is preferred.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>cloud security, infrastructure automation, advanced security practices, NodeJS, TypeScript, Kubernetes, Terraform, CloudFormation, orchestrating and securing GPU clusters, relevant security certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4646888005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d799d883-0dd</externalid>
      <Title>Solutions Architect- Networking</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in leading innovation at every turn. You will have the opportunity to demonstrate thought leadership and engage hands-on throughout our customers&#39; entire lifecycle. From establishing their Kubernetes environment to developing proofs of concept, onboarding, and optimizing workloads, you will lead innovation at every turn.</p>
<p>In this role, you will:</p>
<ul>
<li>Serve as the primary technical point of contact for customers, establishing strong technical relationships and ensuring their success with CoreWeave&#39;s cloud infrastructure offerings, focusing on networking technologies within high-performance compute (HPC) environments.</li>
<li>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements.</li>
<li>Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments.</li>
<li>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise.</li>
<li>Act as a virtual member of CoreWeave&#39;s Networking product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions.</li>
<li>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture.</li>
<li>Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions.</li>
<li>Stay informed of the latest developments and trends in Kubernetes, cloud computing and infrastructure, sharing your thought leadership with customers and internal stakeholders.</li>
<li>Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption.</li>
<li>Represent CoreWeave at conferences and industry events, with occasional travel as required.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>B.S. in Computer Science or a related technical discipline, or equivalent experience.</li>
<li>7+ years of proven experience as a Solutions Architect, engineer, researcher, or technical account manager in cloud infrastructure, focusing on building distributed systems or HPC/cloud services, with expertise focused on infrastructure networking.</li>
<li>Fluency in cloud computing concepts, architecture, and technologies, with hands-on experience in designing and implementing cloud solutions.</li>
<li>Proven track record of building customer relationships, communicating clearly, and breaking down complex technical concepts for both technical and non-technical audiences.</li>
<li>Expertise with a broad range of networking technologies and topics, with the familiarity to understand needs and use cases as they relate to securing and enabling high-performance networking environments.</li>
<li>Experience with managing infrastructure networking, Kubernetes CSI management, and private networking concepts.</li>
<li>Familiarity with NVIDIA GPUs typically used in AI/ML applications and associated technologies such as InfiniBand and the NVIDIA Collective Communications Library (NCCL).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Code contributions to open-source inference frameworks.</li>
<li>Experience with scripting and automation related to network technologies.</li>
<li>Experience with building solutions across multi-cloud environments.</li>
<li>Client- or customer-facing publications/talks on latency, optimization, or advanced model-server architectures.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>cloud computing, Kubernetes, infrastructure networking, high-performance computing, networking technologies, NVIDIA GPUs, Infiniband, NVIDIA Collective Communications Library (NCCL), open-source inference frameworks, scripting and automation, multi-cloud environments, latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure provider that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4568528006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>192b8eb7-029</externalid>
      <Title>Staff iOS Engineer - B2C Native Apps</Title>
      <Description><![CDATA[<p>We are looking for a Staff iOS Engineer to join our B2C Native Apps team. As a member of this team, you will be responsible for designing, developing, and maintaining high-quality iOS applications.</p>
<p>Our team is fast-paced and agile, comprising engineers, a product manager, and designer. We work closely together to deliver innovative solutions that meet the needs of our customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop high-quality iOS applications using Swift and Objective-C</li>
<li>Collaborate with the product manager and designer to define and prioritize features</li>
<li>Work with the engineering team to ensure seamless integration with other components</li>
<li>Participate in code reviews and contribute to the improvement of our codebase</li>
<li>Mentor junior engineers and help them grow in their careers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of professional iOS development experience</li>
<li>Excellent communication and collaboration skills</li>
<li>Experience building public or internal mobile APIs/SDKs and working with Swift and Objective-C</li>
<li>Experience with UIKit, SwiftUI, programmatic Auto Layout, and iOS design patterns (MVVM, reactive programming)</li>
<li>Experience with Unit/UI/integration/performance testing on iOS (Quick, Nimble, XCTest, XCUITest, etc.)</li>
<li>Experience with Realm database or similar mobile NoSQL solutions</li>
<li>End-to-end ownership of mobile applications or SDKs</li>
<li>Experience with mobile CI/CD pipelines (GitHub Actions)</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>1+ years of experience in identity and access management (IAM) domain, particularly with Auth0 Guardian SDK or similar MFA/authentication solutions</li>
<li>Experience with iOS security best practices, including cryptography (RSA, CommonCrypto), biometric authentication (Face ID/Touch ID), iOS Keychain, Authentication Service framework, and secure data storage</li>
<li>Experience with reactive programming frameworks (ReactiveSwift, Combine) and migrating legacy architectures to MVVM patterns</li>
<li>Experience with infrastructure-as-code tools (e.g., Fastlane, Swift Package Manager, Snyk, or Terraform)</li>
</ul>
<p>If you are a motivated and experienced iOS engineer looking to join a dynamic team, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>iOS development, Swift, Objective-C, UIKit, SwiftUI, programmatic Auto Layout, iOS design patterns, MVVM, reactive programming, Unit/UI/integration/performance testing, Realm database, mobile NoSQL solutions, end-to-end ownership, mobile CI/CD pipelines, identity and access management, Auth0 Guardian SDK, MFA/authentication solutions, iOS security best practices, cryptography, biometric authentication, iOS Keychain, Authentication Service framework, secure data storage, reactive programming frameworks, infrastructure-as-code tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions. It was founded in 2009 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7598837</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78ae8204-779</externalid>
      <Title>Senior Staff Software Engineer, Solana Staking Protocol</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Staff Software Engineer to serve as Coinbase&#39;s Solana Staking Protocol CTO, the definitive technical authority on all things Solana staking across the company.</p>
<p>This is not a typical engineering role. You will combine deep Solana protocol mastery with strategic technical leadership to shape Coinbase&#39;s Solana staking trajectory for years to come.</p>
<p>You will own the technical strategy across validator operations, staking integrations, and protocol evolution, partnering directly with engineering leadership, product teams, and external ecosystem players including the Solana Foundation.</p>
<p>You will represent Coinbase on the world stage as a recognized Solana expert, speaking at conferences, engaging with the validator community, and influencing protocol direction.</p>
<p>Internally, you will be the go-to expert for any Solana staking technical decision, from runtime-level optimizations to cross-product integration strategy.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Define Solana Staking Strategy</strong></p>
<p>Own and drive Coinbase&#39;s multi-year technical strategy for Solana staking across validator performance, protocol participation, and product integration.</p>
<p>Connect engineering decisions to business outcomes including yield optimization, cost efficiency, and customer growth.</p>
<p><strong>Maximize Validator Performance</strong></p>
<p>Lead the engineering effort to achieve industry-leading APY through validator optimization, including vote accuracy, block production, MEV strategies, commission tuning, and stake distribution.</p>
<p>Build systems and tooling that give Coinbase a durable performance edge.</p>
<p><strong>Own Protocol Expertise</strong></p>
<p>Serve as Coinbase&#39;s foremost authority on the Solana runtime, consensus mechanism, staking economics, and validator client landscape (Agave, Firedancer, etc.).</p>
<p>Evaluate protocol upgrades (e.g., SIMD proposals), assess risks, and proactively position Coinbase for changes before they land.</p>
<p><strong>Drive Cross-Product Integration</strong></p>
<p>Partner with Retail Staking and Institutional Staking product and engineering teams to architect scalable staking integrations across Coinbase&#39;s product surface area.</p>
<p>Ensure Solana staking is deeply embedded and differentiated in every Coinbase staking product.</p>
<p><strong>Build External Presence &amp; Influence</strong></p>
<p>Represent Coinbase in the Solana ecosystem.</p>
<p>Maintain deep relationships with the Solana Foundation, core development teams, other major validators, and ecosystem partners.</p>
<p>Speak at major conferences (Breakpoint, etc.) and contribute to protocol governance.</p>
<p>Be Coinbase&#39;s voice on Solana staking.</p>
<p><strong>Lead Technical Execution</strong></p>
<p>Write production code.</p>
<p>Design and build critical infrastructure for validator operations, monitoring, automation, and reliability.</p>
<p>Set the technical bar for the team: code reviews, architecture decisions, incident response.</p>
<p><strong>Expand Beyond Staking</strong></p>
<p>Serve as a technical advisor on non-staking Solana initiatives where deep protocol knowledge is required (e.g., Solana tax infrastructure, token programs, new Solana-based products).</p>
<p><strong>Mentor and Scale the Team</strong></p>
<p>Elevate a team of strong engineers (IC4-IC5) through mentorship, architectural guidance, and raising the bar on Solana-specific domain expertise.</p>
<p>Define what great Solana engineering looks like at Coinbase.</p>
<p><strong>Requirements</strong></p>
<p><strong>Deep Solana Protocol Expertise</strong></p>
<p>You have extensive, hands-on experience with Solana&#39;s architecture, e.g., the runtime, validator mechanics, staking economics, consensus (Tower BFT), Turbine, Gulf Stream, and the validator client ecosystem.</p>
<p>You understand Solana at the source-code level, not just the API level.</p>
<p><strong>Technical Authority &amp; Execution</strong></p>
<p>You are a strong IC7-caliber engineer.</p>
<p>You design and build complex distributed systems.</p>
<p>You write production code in Rust and/or Go.</p>
<p>You have deep experience with infrastructure at scale: bare metal, cloud, networking, observability.</p>
<p><strong>Strategic Vision</strong></p>
<p>You can define year-long technical strategies and connect them to business goals.</p>
<p>You break down ambiguous, large-scope problems into executable plans with measurable milestones.</p>
<p>You think in terms of competitive advantage, not just engineering correctness.</p>
<p><strong>Ecosystem Presence &amp; Influence</strong></p>
<p>You are a known figure in the Solana ecosystem.</p>
<p>You have existing relationships with the Solana Foundation, core contributor teams, and major validators.</p>
<p>You have a track record of public speaking, community engagement, or protocol governance participation.</p>
<p><strong>Cross-Functional Leadership</strong></p>
<p>You partner effectively with product, business, and executive stakeholders.</p>
<p>You translate complex protocol dynamics into business-relevant terms for non-technical audiences.</p>
<p>You drive alignment across multiple teams and functions.</p>
<p><strong>Passion for Solana</strong></p>
<p>This isn&#39;t a role for a generalist who happens to know some Solana.</p>
<p>You are genuinely passionate about the Solana ecosystem, follow protocol developments closely, and have a strong thesis on where Solana staking is headed.</p>
<p><strong>Ability to Responsibly Use Generative AI Tools</strong></p>
<p>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</p>
<p><strong>Nice to Have</strong></p>
<p><strong>Core Contributor to Solana Validator Clients</strong></p>
<p>Core contributor to Solana validator clients (Agave, Firedancer) or significant Solana ecosystem projects.</p>
<p><strong>Experience Operating in Highly Regulated Industries</strong></p>
<p>Experience operating in highly regulated industries or security-first cultures.</p>
<p><strong>Background in Financial Services</strong></p>
<p>Background in financial services, fintech, or crypto custody.</p>
<p><strong>Track Record of Publishing Technical Content</strong></p>
<p>Track record of publishing technical content (blog posts, research, conference talks) on Solana or Blockchain in general.</p>
<p><strong>Experience with Solana&#39;s Evolving Staking Landscape</strong></p>
<p>Experience with Solana&#39;s evolving staking landscape: liquid staking, stake pools, restaking protocols.</p>
<p><strong>Familiarity with Other PoS Protocol Staking Operations</strong></p>
<p>Familiarity with other PoS protocol staking operations (Ethereum, Cosmos ecosystem) for comparative perspective.</p>
<p><strong>Pay Transparency Notice</strong></p>
<p>Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical and dental).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Solana, Rust, Go, Distributed Systems, Cloud Infrastructure, Networking, Observability, Validator Operations, Staking Integrations, Protocol Evolution, Cross-Product Integration, Technical Leadership, Strategic Vision, Competitive Advantage, Business Goals, Executable Plans, Milestones, Alignment, Multiple Teams, Functions, Passion for Solana, Generative AI Tools, Copilots, Human-in-the-Loop Practices, Efficiency, Cost, Quality</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider that operates globally.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7684298</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>68b097b8-6dc</externalid>
      <Title>Senior Product Engineer</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>At Intercom, you will be a product engineer - someone who solves real customer problems through a smart and efficient application of your technical knowledge and your tools. You’ll be part of one of our multidisciplinary product teams, where you will build both back-end and front-end systems, and work closely with designers, product managers, researchers, and data analysts.</p>
<p>We’re facing many exciting scaling challenges and we’re building a robust platform where your expertise can be applied to areas such as building a beautiful messenger composer, rule matching, deliverability, security, app availability and machine learning, to name a few.</p>
<p>Responsibilities:</p>
<p>As an experienced engineer you will:</p>
<ul>
<li>Develop technical plans and contribute to our technical architecture as we scale our products to serve tens of millions of people every day.</li>
<li>Write Ruby code, which knits together a lot of AWS, infrastructure, platform, and SaaS technologies that form the core of Intercom’s backend infrastructure.</li>
<li>Ship a change to production on your first day and a feature in your first week. That “day one” change is automatically deployed to production along with 100 other deployments (on average) each weekday.</li>
<li>Build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</li>
<li>Grow your team’s capacity by mentoring other engineers and interviewing candidates. This is a chance to be an integral part of building and growing a team.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of industry experience in a software engineering role, preferably building a SaaS product. You can demonstrate significant impact that your work has had on the product and/or the team.</li>
<li>Deep knowledge of a high-level programming language (for example, Ruby, Python, or JavaScript).</li>
<li>Experience collaborating directly with product teams and designers, and a proven track record of delivering value to customers or users. Engineers at Intercom are pragmatists who work closely with others on cross-disciplinary teams.</li>
<li>Experience with distributed systems.</li>
</ul>
<p>Benefits:</p>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Flexible paid time off policy</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, AWS, Infrastructure, Platform, SaaS technologies, Distributed systems, High-level programming language, Python, Javascript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI customer service company that helps businesses deliver better customer experiences. It was founded in 2011 and serves nearly 30,000 businesses globally.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6386428</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86363ae6-10f</externalid>
      <Title>Manager, Field Engineering - Strategic Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the largest, fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of digital native customers</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proofs of concept, and establish themselves as trusted advisors throughout the customer lifecycle</li>
<li>Travel to customer sites for executive sessions, technical workshops, and relationship building</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and the team</li>
<li>Stay current with emerging data and AI trends in the digital native tech sector</li>
</ul>
<p>What we look for:</p>
<ul>
<li>4+ years of experience in the data space with a technical product (e.g. data warehousing, big data, cloud infrastructure, or machine learning)</li>
<li>3+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets</li>
<li>Experience inspiring a team vision around the unique nature of the digital natives business</li>
<li>A history of execution, managing workloads and consumption with sales, product, and engineering counterparts</li>
<li>Experience owning executive alignment in accounts, guiding strategic decisions</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range: $172,500 - $237,150 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$172,500-$237,150 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, data analysis, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8458032002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>