<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>bee517db-e9c</externalid>
      <Title>DevOps Engineer (all genders)</Title>
      <Description><![CDATA[<p>Join our DevOps team at Holidu, a central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>
<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>
<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>
<li>Container Orchestration: Kubernetes with Helm</li>
<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/CDK</li>
<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>
<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>
<li>Scripting: Python, Go, Bash</li>
<li>Version Control: GitHub</li>
<li>Collaboration: Jira (Agile)</li>
<li>Automation: N8N, AI-assisted tooling (Agentic ADK)</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DevOps Engineer, you will be responsible for:</p>
<ul>
<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>
<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>
<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>
<li>Troubleshooting production issues and participating in capacity planning</li>
<li>Maintaining and troubleshooting Kubernetes clusters, deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>
<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>
<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub Actions, ArgoRollouts and ArgoCD</li>
<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>
<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>
<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>
<li>Hands-on experience with Docker and Kubernetes in production environments, deploying, scaling, and troubleshooting containerized workloads</li>
<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>
<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>
<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>
<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>
<li>Familiarity with logging and log aggregation systems (Elastic Stack, OpenTelemetry, or similar)</li>
<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>
<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with Helm charts and Kubernetes package management</li>
<li>Familiarity with GitOps workflows (e.g., GitHub Actions, ArgoCD, Flux)</li>
<li>Experience designing AWS services-based architectures</li>
<li>Experience with AI automation or low-code/no-code platforms such as N8N</li>
<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>
<li>Exposure to cost optimization strategies for cloud infrastructure</li>
<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>
<li>Experience with DevSecOps practices, integrating security scanning and compliance into CI/CD pipelines</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>
<li>Technology: Work in a modern tech environment</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and work up to 8 weeks a year from other inspiring locations</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud, Container Orchestration, Infrastructure as Code, Monitoring &amp; Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2595036</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77c41bb-0ad</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert, you will have direct experience with a wide range of security technologies, tools, and methodologies. The role suits an experienced Application Security Engineer with a proven understanding of enterprise security and AI security, and will focus on building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm’s information and computer systems. Millennium is a complex and robust technical environment and securing the Firm from external and internal threats is a top priority.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</li>
<li>Conduct specialized threat modeling, red teaming, and risk assessments for AI/ML models (e.g., testing for prompt injection, model theft, and data poisoning).</li>
<li>Lead risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects.</li>
<li>Engage throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards.</li>
<li>Evangelize AppSec and AI security best practices through developer education, training materials, and outreach.</li>
<li>Design robust security architectures and integrate automated security testing (SAST/DAST/SCA) into CI/CD pipelines.</li>
<li>Partner with Technology, Trading, Legal, and Compliance to create policies and communicate technical risks to non-technical stakeholders.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field.</li>
<li>5+ years’ experience working as an Application Security Engineer, Software Engineer, or similar role.</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs.</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI.</li>
<li>Experience in working with global teams, collaborating on code and presentations.</li>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure).</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols.</li>
<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc., and integrating security scanning and vulnerability management into CI/CD pipelines.</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions.</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault.</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar.</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.).</li>
<li>Familiarity with web application security testing tools and methodologies.</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus.</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Automated security testing, CI/CD pipelines, Security architectures, Secure configuration principles, Cryptography fundamentals, Encryption protocols, SCM &amp; CI/CD technologies, Security scanning, Vulnerability management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets management, Password vault technologies, Secure programming, Infrastructure as Code tools, Web application security testing tools, Methodologies, Security frameworks, Standards, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a technology-focused organisation that provides infrastructure services to various businesses.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629927</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a75ea8b-5b4</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies, you will play a key role in building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The successful candidate will have a proven understanding of enterprise security and AI security and will focus on defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks</li>
<li>Conducting specialized threat modeling, red teaming, and risk assessments for AI/ML models</li>
<li>Leading risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects</li>
<li>Engaging throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards</li>
<li>Evangelizing AppSec and AI security best practices through developer education, training materials, and outreach</li>
</ul>
<p>Qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field</li>
<li>5+ years&#39; experience working as an Application Security Engineer, Software Engineer, or similar role</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI</li>
<li>Experience in working with global teams, collaborating on code and presentations</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols</li>
<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc., and integrating security scanning and vulnerability management into CI/CD pipelines</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>
<li>Familiarity with web application security testing tools and methodologies</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous</li>
</ul>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Developer education, Training materials, Outreach, Common SCM &amp; CI/CD technologies, GitHub, Jenkins, Artifactory, Security Scanning, Vulnerability Management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets Management &amp; Password Vault technologies, Delinea Secret Server, Hashicorp Vault, Secure programming, Python, Java, C++, C#, Infrastructure as Code tools, CloudFormation, Terraform, Ansible, Web application security testing tools, Methodologies, Security frameworks, Standards, ISO 27001, NIST, OWASP, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a department within a larger organisation that focuses on providing and maintaining the underlying technology infrastructure.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629908</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p><strong>Qualifications</strong></p>
<ul>
<li>6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred)</li>
<li>Strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles</li>
<li>Proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK)</li>
<li>Practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment</li>
<li>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines</li>
<li>Good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus</li>
<li>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management</li>
<li>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls</li>
<li>Strong problem-solving and analytical skills; demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams</li>
<li>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies</li>
<li>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams</li>
<li>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink)</li>
<li>Familiarity with Capital Markets concepts and operating models</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or any other scientific field</li>
<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them within Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Wiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a DevOps, platform, or infrastructure role</li>
</ul>
<ul>
<li><strong>Customer Product Whisperer</strong></li>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>262aa1cb-01c</externalid>
      <Title>Head of Corporate Engineering</Title>
<Description><![CDATA[<p>As Head of Corporate Engineering, you will be responsible for Enterprise engineering and operations globally. You will build and manage a highly technical enterprise engineering team, develop first-principles-based strategies, and enable strong enterprise security.</p>
<p>Key responsibilities include engineering, securing, and optimizing cloud infrastructure, Identity and Access Management, Endpoints, Collaboration tools, and ensuring compliance with SOX, PCI DSS, and FedRAMP. The Head of Corporate Engineering will work closely with R&amp;D on managing engineering tools like Jira, Confluence, and GitHub, driving efficient adoption and integration.</p>
<p>Strong technical leadership and influencing skills, coupled with the ability to manage a complex, scaling, and fast-moving enterprise environment, are essential. This role reports directly to the Vice President, Infrastructure and Operations.</p>
<p>Responsibilities:</p>
<p>In this influential role, you will be responsible for:</p>
<p>Securing the Enterprise: Working closely with the Enterprise Security organization to harden and secure our cloud environments, secret management, collaboration tools, endpoints, SaaS environments, IAM tools, and more. Success is measured by continuous improvement of our enterprise security hardening standards</p>
<p>Building and Scaling our Cloud Infrastructure: Your team will be responsible for establishing and implementing enterprise cloud infrastructure, including Infrastructure Provisioning, SRE services, 24/7 on-call support, Infrastructure as Code, observability, and more. In addition, you will be responsible for managing cloud budgets, vendor management, and establishing cost optimization initiatives. Success is measured in increased developer velocity while securing &amp; scaling the cloud infrastructure</p>
<p>Engineering Tooling: Partner closely with R&amp;D teams to establish policies, configurations, run-books, SLAs, hardening, scalability, and availability of engineering tools like GitHub, Jira, Atlassian, and more</p>
<p>Endpoint Engineering: Enable extreme automation for endpoint management with zero-touch deployment, observability (synthetic and real-time), provisioning/de-provisioning, and establishing standards / SLAs. Enforce security policies, configure &amp; manage security settings and ensure compliance across all endpoints and mobile devices. Success is measured in terms of end-user satisfaction and % of manual touch</p>
<p>Collaboration Management: Ensure we provide world-class tools to our employees to be extremely productive and collaborative. This would include, but not be limited to, managing and scaling internal workplace products like Gmail, Slack, Atlassian, Moveworks, Glean, and more. Success is measured by user satisfaction</p>
<p>Identity &amp; Access Management: Manage the IAM team across IAM implementation, access standards enforcement, SLA management, and compliance with standards like FedRAMP, IL5, PCI, and more. This includes managing both internal and external identity providers. Success is measured by compliance, identity governance, and availability</p>
<p>Desired Success Outcomes</p>
<p>A high-performing enterprise engineering team capable of handling complex technical projects with agility and high quality</p>
<p>Well-defined cloud strategy ensuring the stability, scalability, and security of cloud infrastructure. Overhaul of current processes and workflows to address inefficiencies and increase team velocity</p>
<p>Robust endpoint security with implementation of comprehensive security measures for all endpoints, including Mac, Windows, and mobile devices</p>
<p>Deliver a high-quality employee experience with productivity tools (Gmail, Slack, Atlassian tools, Moveworks, GitHub), backed by a robust forward-looking roadmap</p>
<p>Efficient operational support for Tier 3 IT services with minimized production incidents. Implementation of robust incident and change management processes with mature operational practice</p>
<p>Efficient and mature processes for system integrations related to Mergers and Acquisitions (M&amp;As), ensuring timely, smooth transitions during M&amp;A integrations</p>
<p>Development and implementation of automation tools and frameworks, and identification of automation opportunities to reduce manual toil and improve accuracy</p>
<p>Qualifications:</p>
<p>10 years of experience managing cloud infrastructure at large enterprises. Extensive experience managing public cloud implementations in AWS; experience with GCP and Azure is a plus</p>
<p>In-depth understanding of cloud-native technologies to lead and guide the team. Must have hands-on experience troubleshooting and debugging issues in production environments</p>
<p>Working experience managing DevOps/SRE practices: OKRs (Objectives and Key Results), Agile development, Infrastructure as Code, SRE (Site Reliability Engineering), and DevOps measurement such as DORA KPIs</p>
<p>In-depth understanding of each collaboration tool&#39;s features, functionalities, and configurations (e.g., Gmail for email, Slack for messaging). Ability to identify, integrate, and optimize the use of various tools for seamless collaboration (e.g., connecting Jira with GitHub for dev metrics)</p>
<p>Experience leading a team of senior professionals working asynchronously in a remote, distributed team. Strong communication skills, both verbal and written</p>
<p>Collaborative style: partners well with cross-functional teams to solve hard problems and to complete complex deliverables with quality and business outcomes</p>
<p>Provide mentorship and guidance to team members to ensure that their skills and knowledge are kept up to date</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range: $265,000-$364,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$265,000-$364,300 USD</Salaryrange>
      <Skills>Cloud infrastructure, Identity and Access Management, Endpoint security, Collaboration tools, DevOps, Site Reliability Engineering, Agile development, Infrastructure as Code, Observability, Automation, Scripting languages, Cloud native technologies, Public cloud implementations, AWS, GCP, Azure, Jira, Confluence, GitHub, Atlassian, Moveworks, Glean, Slack, Gmail, Microsoft Office</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7293607002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5717691a-508</externalid>
      <Title>Staff Infrastructure Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are looking for a Staff Infrastructure Software Engineer to act as a primary technical lead, engineering the &#39;paved road&#39; for our knowledge retrieval and inference engines. You will define the deployment standards for Agentic workflows at scale, bridging the gap between complex AI orchestration and world-class infrastructure.</p>
<p>The ideal candidate thrives in a fast-paced environment, has a passion for both deep technical work and mentoring, and is capable of setting a long-term technical strategy for a critical domain while maintaining a strong, hands-on delivery focus.</p>
<p>You will architect and implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers.</li>
<li>Using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement.</li>
<li>Defining the architectural patterns for our multi-cloud infrastructure to support secure, reliable, and scalable Agentic workflows for enterprise customers.</li>
<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>
<li>Designing and championing highly scalable, reliable, and low-latency infrastructure and frameworks for building, orchestrating, and evaluating multi-agent systems at enterprise scale.</li>
<li>Leading the infrastructure roadmap with a strong focus on compliance, privacy, and security standards, including designing change management and data isolation strategies.</li>
<li>Owning the development and maintenance of our best-in-class Agentic observability platform (logging, metrics, tracing, and analytics) to proactively ensure system health and enable rapid incident response.</li>
<li>Driving developer efficiency by building automated tooling and championing Infrastructure-as-Code (IaC) paradigms throughout the engineering organization to improve workflows and operational efficiency.</li>
</ul>
<p>The ideal candidate has proven experience in a senior role, with 5+ years of full-time software engineering experience, and a deep understanding of modern infrastructure practices, including CI/CD, IaC (e.g., Terraform, Helm Charts), container orchestration (e.g., Kubernetes) and observability platforms (e.g., Datadog, Prometheus, Grafana).</p>
<p>Extensive experience with at least one major cloud provider (AWS, Azure, or GCP) and strong knowledge of security and compliance in enterprise environments, with a focus on access management, data isolation, and customer-specific VPC setups is required.</p>
<p>Proficiency in Python or JavaScript/TypeScript, and SQL is also necessary.</p>
<p>Bonus points for hands-on experience and a passion for working with Agents, LLMs, vector databases, and other emerging AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,200-$310,500 USD</Salaryrange>
      <Skills>Cloud computing, Infrastructure as Code, Container orchestration, Observability platforms, Security and compliance, Access management, Data isolation, Customer-specific VPC setups, Python, JavaScript/TypeScript, SQL, Agents, LLMs, Vector databases, Emerging AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599700005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0373d52-7fe</externalid>
      <Title>Senior IAM Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior IAM Engineer to join our team. As a Senior IAM Engineer, you will play a critical role in securing our systems and data. You will have the opportunity to work with cutting-edge IAM technologies, collaborate with cross-functional teams, and influence the development of our IAM strategy.</p>
<p>Your primary focus will be on designing and implementing identity lifecycle management, integration and orchestration, access governance, security and compliance, custom tooling, and data and AI infrastructure support. You will also be responsible for collaborating with cross-functional teams, improving provisioning and deprovisioning processes, integrating and managing IdPs within the IAM system, handling and streamlining access requests, developing and implementing IAM policies and procedures, and responding to ad-hoc requests.</p>
<p>To be successful in this role, you will need to have a strong understanding of identity lifecycle management, directory services, SSO, MFA, SCIM provisioning, and federation (SAML, OIDC, OAuth). You will also need to have experience partnering with HR, Finance, Compliance, and other cross-functional teams to design and implement IAM and enterprise solutions.</p>
<p>Additional skills and experience we&#39;d prioritize include experience with Workato or similar integration orchestrator tools, experience with Okta Workflows, certifications such as Workato or Okta Certified Professional/Administrator/Consultant, experience integrating IAM with HR systems, knowledge of compliance requirements related to IAM, and background in cloud platforms (AWS, GCP, Azure) and IAM integrations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scripting, Automation Mindset, APIs, Infrastructure as Code, Security Mindset, Identity and Access Management, Okta, Workday, Google Workspace, SCIM provisioning, Federation (SAML, OIDC, OAuth), Directory services, SSO, MFA, Workato, Okta Workflows, Certifications (Workato or Okta Certified Professional/Administrator/Consultant), Experience integrating IAM with HR systems, Knowledge of compliance requirements related to IAM, Background in cloud platforms (AWS, GCP, Azure) and IAM integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8393728002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a close-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully-remote, distributed team</li>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
<li>Take strong ownership of your work from end to end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p>Equity Stake Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
<li>Equity or comparable benefits may be offered depending on the legal limitations</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[<p>Job Title: Senior Staff Operations Engineer, AIOps</p>
<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficient in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Workplace Type: Remote eligible.</p>
<p>Experience Level: Senior.</p>
<p>Employment Type: Full-time.</p>
<p>Category: Engineering.</p>
<p>Industry: Technology.</p>
<p>Required Skills: AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, and correlation.</p>
<p>Preferred Skills: Cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8447826b-717</externalid>
      <Title>Senior Systems Integration Engineer</Title>
      <Description><![CDATA[<p>EarnIn is scaling its systems, automations, and data capabilities to power its people and protect its information. As a Senior Systems Integration Engineer, you will be a hands-on technical lead focused on Python-driven automation, building systems integrations between HRIS, Identity Provider, SaaS, and Finance Platform, and transforming operational data into actionable insights and dashboards.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, build, and maintain production-grade automations and internal tools in Python to eliminate manual work across identity, endpoint, and SaaS operations.</li>
<li>Develop resilient API integrations and event-driven workflows (webhooks, queues) with robust error handling, retries, and observability; package reusable libraries and CLIs that standardize how IT automates.</li>
<li>Codify repeatable infrastructure with Terraform; manage changes via Git and CI/CD (e.g., GitHub Actions).</li>
<li>Build and operate integrations between HRIS/IdP/SaaS and financial platforms (e.g., NetSuite, Carta, Expensify), ensuring data quality, lineage, and reconciliation across systems.</li>
<li>Create and maintain lightweight services that normalize and enrich data flows to power business intelligence and compliance reporting (Tableau/Power BI/Looker Studio).</li>
<li>Define KPIs/SLIs/SLOs for core IT services (availability, compliance, MTTR, deflection, time-to-productive-employee) and implement monitoring/alerting.</li>
<li>Build data warehouses (e.g., Databricks), write SQL against them (e.g., BigQuery), and build self-serve dashboards for IT, Security, Finance, People Ops, and Engineering; instrument pipelines for accuracy and freshness.</li>
<li>Deliver repeatable, audit-ready evidence for controls via dashboards and scheduled reports.</li>
<li>Evaluate and deploy AI tools with guardrails to boost IT productivity; automate helpdesk workflows (triage, summarization, routing, knowledge search).</li>
<li>Define and track value metrics (adoption, deflection, CSAT, MTTR, time saved); iterate based on experiments and user feedback.</li>
<li>Implement and sustain controls mapped to SOC 2 and PCI (as applicable) with repeatable evidence collection.</li>
<li>Define and review SLIs/SLOs; add monitoring/alerting, config drift detection, and incident runbooks.</li>
<li>Lead cross-functional projects with Security, People Ops, Finance, and Engineering, from design through steady state.</li>
<li>Mentor junior engineers through design and code reviews; publish clear documentation that makes the reliable path the easy path.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, API/OpenAPI, event-driven workflows, SQL, Infrastructure as Code (Terraform), Git-based change management, security mindset</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, providing financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7703637</Applyto>
      <Location>Remote, Mexico</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a Management role that combines deep engineering and AI expertise, leading a team, and working on customer-facing problems. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team collaborating with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eff95313-cdc</externalid>
      <Title>Senior Site Reliability Engineer</Title>
      <Description><![CDATA[<p>The Senior Site Reliability Engineer will play a key role in developing scalable, reliable, and efficient infrastructure that powers the entire company. This includes building and scaling internal platform offerings, designing and implementing monitoring, alerting, and incident response systems, and collaborating with application software engineers to guide their design and ensure it scales for what Carta needs in the long run.</p>
<p>The ideal candidate will have extensive experience with cloud services such as AWS, Google Cloud Platform, or Azure, including services like EC2, S3, RDS, and Lambda. They will also be proficient in using tools such as Terraform, Ansible, or CloudFormation for managing and provisioning cloud infrastructure.</p>
<p>The team is responsible for providing secure, reliable, scalable, and performant infrastructure to Carta&#39;s customers and developers. The successful candidate will be a strong communicator who enjoys collaborating to solve complex problems and has familiarity with infrastructure best practices on performance, reliability, and security and their associated tools.</p>
<p>Our stack is Python, Java, Terraform, gRPC, Docker, Kubernetes, Postgres, running on AWS. Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,688 - $225,000</Salaryrange>
      <Skills>Cloud Platforms, Infrastructure as Code (IaC), Networking, Monitoring and Observability, Software Development, API Services, AI Fluency, Experience operating CI/CD and its associated best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta provides software for venture capital, private equity, and private credit, supporting over 9,000 funds and SPVs with assets under management of nearly $185 billion.</Employerdescription>
      <Employerwebsite>https://carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7688689003</Applyto>
      <Location>San Francisco, California; Santa Clara, California; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e425bff-319</externalid>
      <Title>Software Engineer, Developer Experience</Title>
      <Description><![CDATA[<p>We&#39;re growing our team of engineers and are looking for a technical leader to help shape our build and CI infrastructure. As a member of the Build Systems team, you&#39;ll drive technical roadmap and strategy, partner with cross-functional teams, and lead complex initiatives reducing build/test times and improving CI reliability.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Driving technical roadmap and strategy for the Build Systems team</li>
<li>Partnering with cross-functional teams and leadership to identify developer pain points and design elegant, scalable solutions</li>
<li>Leading complex, multi-quarter initiatives reducing build/test times and improving CI reliability</li>
<li>Designing, building, and maintaining modern developer tools including scalable build systems, distributed CI platforms, and test frameworks</li>
<li>Architecting and implementing large-scale infrastructure on AWS that powers our entire build pipeline to ensure reliability, performance, and cost-efficiency</li>
<li>Mentoring and upleveling the team through technical guidance, code reviews, and architectural leadership</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of engineering experience, including 4+ years on Infrastructure, Platform, or Developer Experience teams. You should have deep expertise in build systems (e.g., Bazel, Buck, Pants, Gradle) and operating large-scale monorepos. Experience optimizing CI/CD, building developer tooling at scale, and designing reliable, high-performance distributed systems is also required.</p>
<p>Preferred skills include experience migrating large codebases to modern build systems (especially Bazel), building test frameworks and improving test infrastructure, and excellent problem-solving skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$185,000-$376,000 USD</Salaryrange>
      <Skills>build systems, Bazel, Buck, Pants, Gradle, CI/CD, developer tooling, distributed systems, AWS, infrastructure as code, Go, TypeScript, Python, Ruby, migrating large codebases to modern build systems, building test frameworks, improving test infrastructure, problem-solving skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma develops a platform for design and collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5790627004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0796e182-42e</externalid>
      <Title>Sr. Software Engineer, Backend (Search Platform)</Title>
      <Description><![CDATA[<p>About Dialpad</p>
<p>Dialpad is the AI-native business communications platform. We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>
<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>
<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyze conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time.</p>
<p>Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do.</p>
<p>Visit dialpad.com to learn more.</p>
<p>Being a Dialer</p>
<p>At Dialpad, AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>
<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>
<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>
<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>
<p>Your role</p>
<p>Dialpad’s Product Engineering organization is responsible for building and maintaining the customer-facing features at scale across all of our cloud-native products and services.</p>
<p>Every day, millions of users across the world leverage our technology for communicating effectively and efficiently.</p>
<p>Every engineer on our global engineering team is given the opportunity to take ownership of a large portion of the product where they’re able to see immediate results.</p>
<p>Combining natural language processing and artificial intelligence with world-class cloud computing, the things you’ll create at Dialpad will shape the future of work, enabling companies to work from anywhere and making business communication more human.</p>
<p>Dialpad’s Analytics team owns data pipelines, multiple databases, a modular query layer, and rich FE components to deliver intuitive and powerful end-user-facing analytics experiences that allow Dialpad customers to make data-driven business decisions.</p>
<p>Our teams are highly collaborative and comprise cross-disciplinary professionals, including Product Managers, Designers, QA specialists, as well as Engineers specialising in Data Engineering, Data Science, and Telephony.</p>
<p>This position reports to the Engineering Manager, who is based in Bengaluru, and the role will be based in our Bengaluru, India Office.</p>
<p>The position will require a hybrid working arrangement based out of our Bengaluru office.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Contribute to the design, development, and maintenance of information retrieval and distributed systems.</li>
<li>Build and optimize search engines, including indexers, analyzers, ranking, and re-ranking strategies.</li>
<li>Work on hybrid search techniques, including dense vector manipulation, rank fusion, and reranking.</li>
<li>Maintain and enhance highly scalable search platforms with a focus on performance and cost efficiency.</li>
<li>Ensure high availability, reliability, and fault tolerance in search services.</li>
<li>Collaborate with cross-functional teams to translate business requirements into technical solutions.</li>
<li>Develop and optimize real-time distributed systems, microservices, and message-driven architectures.</li>
<li>Implement and maintain monitoring, alerting, and performance metrics for platform reliability.</li>
<li>Evaluate and integrate emerging technologies to improve search capabilities.</li>
<li>Write clean, modular, and well-tested code while following best engineering practices.</li>
<li>Participate in code reviews to ensure quality, maintainability, and scalability.</li>
<li>Provide mentorship and technical guidance to junior engineers.</li>
</ul>
<p><strong>Skills you’ll bring</strong></p>
<ul>
<li>4-7 years of experience in information retrieval or distributed systems engineering.</li>
<li>Strong understanding of search platforms and experience maintaining search engines at scale.</li>
<li>Deep knowledge of indexers, analyzers, field mapping, and ranking techniques.</li>
<li>Experience with NLP/NLU within the context of information retrieval.</li>
<li>Expertise in dense vector manipulation and optimization.</li>
<li>Familiarity with hybrid search, rank fusion, and reranking techniques.</li>
<li>Proficiency in Go and Python 3 (experience with Rust or TypeScript is a plus).</li>
<li>Strong understanding of distributed systems, microservices, and message-driven architectures.</li>
<li>Passion for real-time performance optimization and high availability.</li>
<li>Experience with API design using Swagger, OpenAPI, or equivalent tools.</li>
<li>Knowledge of gRPC or equivalent RPC protocols.</li>
<li>Experience with Docker and Kubernetes for containerized deployments.</li>
<li>Familiarity with cloud platforms (GCP preferred, AWS/Azure optional).</li>
<li>Hands-on experience with Infrastructure as Code tools like Terraform or Ansible.</li>
<li>Knowledge of CI/CD frameworks and continuous delivery practices.</li>
</ul>
<p>Why Join Dialpad</p>
<ul>
<li>Work at the center of the AI transformation in business communications</li>
<li>Build and ship agentic AI products that are redefining how companies operate</li>
<li>Join a team where AI amplifies every employee’s impact</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>
<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>
<p>Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>
<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>
<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>information retrieval, distributed systems engineering, search platforms, indexers, analyzers, field mapping, ranking techniques, NLP/NLU, dense vector manipulation, optimization, hybrid search, rank fusion, reranking, Go, Python 3, API design, gRPC, Docker, Kubernetes, cloud platforms, Infrastructure as Code, CI/CD frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8340906002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c0df50e1-9cd</externalid>
      <Title>Consultant, Developer Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Cloud Engineer for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products, and advise on product best practices.</li>
<li>Gather business and technical requirements, use cases, and any other information required to build, migrate, and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>
<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>
<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>
<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering, and other teams to assist technical escalations.</li>
<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>
</ul>
<p>You will also support building operational assets such as templates, automation scripts, procedures, and workflows.</p>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in a customer-facing position as a Consultant delivering services.</li>
<li>Demonstrated experience with:
<ul>
<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, and HTTP.</li>
<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>
<li>Infrastructure as code tools like Terraform.</li>
<li>Strong experience with APIs.</li>
<li>CI/CD pipelines using Azure DevOps or Git.</li>
<li>Implementation and troubleshooting, including observability tooling and log analysis.</li>
</ul>
</li>
<li>Good understanding and knowledge of:
<ul>
<li>Internet and security technologies such as DDoS, Web Application Firewall, certificates, DNS, CDN, analytics, and logs.</li>
<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, and OWASP.</li>
<li>Performance aspects of an internet property, such as speed, latency, caching, HTTP/3, and TLSv1.3.</li>
</ul>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>You have worked with a cybersecurity company or products and have performed migrations using migration tools.</li>
<li>You have developed application security and performance capabilities.</li>
<li>Ability to manage a project, work to deadlines, prioritize between competing demands, and manage uncertainty.</li>
<li>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already deployed for Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never store client IP addresses. Ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3, You have worked with a Cybersecurity company or products and have performed migrations using migration tools, You have developed application security and performance capabilities, Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty, The work will be performed in English. Fluency in a second regional European language is a strong advantage</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare provides a network that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7383015</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance</li>
<li>100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bb321e04-e73</externalid>
      <Title>Senior Full Stack Engineer - Team Web</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Full Stack Engineer to join Team Web, who is passionate about crafting intuitive front-end experiences and building the backend systems and tools that power them. You&#39;ll play a key role in shaping the future of our website across the full stack, from UI to infrastructure, while collaborating with product marketers, designers, and engineers across the business.</p>
<p>As a Senior Full Stack Engineer, you&#39;ll design, build, and maintain end-to-end web solutions, from modern UIs to backend services, APIs, and infrastructure. You&#39;ll collaborate with design, brand, marketing, and content teams to deliver seamless, performant experiences across web and mobile. You&#39;ll develop backend logic and APIs, manage data flows, and implement systems that integrate with third-party platforms.</p>
<p>You&#39;ll optimize website performance by applying front-end best practices, including lazy loading and efficient asset management. You&#39;ll set up and manage infrastructure using tools like Vercel, AWS, CloudFront, Terraform, and CI/CD pipelines (e.g., CircleCI). You&#39;ll implement and maintain web analytics and support A/B testing for data-driven decisions.</p>
<p>You&#39;ll stay current with emerging technologies and trends to continually improve our development processes and user experience. You&#39;ll be comfortable writing backend software. We look for engineers to be able to unblock themselves end to end.</p>
<p>You&#39;ll build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, HTML, CSS, React, Next.js, Tailwind, CMS platforms (Contentful and Sanity), marketing tools (Google Tag Manager, Marketo), CI/CD tools (CircleCI), infrastructure as code tools (Terraform), cloud platforms (AWS, Vercel, CloudFront, S3), A/B testing, analytics tools, performance optimization techniques, best practices for fast-loading, responsive websites, testing frameworks (Jest, Mocha, Cypress)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer service solutions to businesses. It was founded in 2011 and has nearly 30,000 global businesses as clients.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7276257</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bec4e006-74f</externalid>
      <Title>Consultant, Developer Platform</Title>
      <Description><![CDATA[<p>About the role: Cloudflare provides advisory and hands-on-keyboard implementation and migration services for enterprise customers. As a Consultant for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>
<p>You are an expert in Developer Platform products or equivalent and will focus on building and deploying serverless applications with scale, performance, security and reliability leveraging: Workers, Workers KV, Workers AI, D1, R2, Images, and many other products.</p>
<p>This position has working hours Monday to Friday 09:00 a.m. to 06:00 p.m. Occasionally, we support our customers during the weekends for specific changes that need to be done outside of their business hours. Travel is expected to be around 40%.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products, and advise on product best practices.</li>
<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>
<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>
<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>
<li>Proven experience with Cloudflare Workers or similar platforms, including JavaScript/TypeScript and Workers APIs.</li>
<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>
<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>
</ul>
<p>You will also support building operational assets such as templates, automation scripts, procedures, and workflows.</p>
<p>Experience might include a combination of the skills below:</p>
<ul>
<li>3+ years of experience in a customer facing position as a Consultant delivering services.</li>
<li>Demonstrated experience with:</li>
</ul>
<ul>
<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, and HTTP.</li>
<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>
<li>Infrastructure as code tools like Terraform.</li>
<li>Strong experience with APIs.</li>
<li>CI/CD pipelines using Azure DevOps or Git.</li>
<li>Implementation and troubleshooting experience, including observability tooling and log analysis.</li>
</ul>
<p>Good understanding and knowledge of:</p>
<ul>
<li>Internet and security technologies such as DDoS, Web Application Firewall, certificates, DNS, CDN, analytics, and logs.</li>
<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, and OWASP.</li>
<li>Performance aspects of an internet property, such as speed, latency, caching, HTTP/3, and TLSv1.3.</li>
</ul>
<p>Strong advantage if:</p>
<ul>
<li>You have worked with a cybersecurity company or products and have performed migrations using migration tools.</li>
<li>You have developed application security and performance capabilities.</li>
<li>You can manage a project, work to deadlines, prioritize between competing demands, and manage uncertainty.</li>
</ul>
<p>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare provides internet infrastructure and security services to protect and accelerate online applications. It operates one of the world&apos;s largest networks, powering millions of websites and other internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7383013</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d50772ab-afe</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p>We are seeking a Staff / Senior Software Engineer to join our Cloud Inference team. The successful candidate will design and build infrastructure that serves Claude across multiple cloud service providers (CSPs), accounting for differences in compute hardware, networking, APIs, and operational models.</p>
<p>The ideal candidate will have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users. They will also have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimise inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyse observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code, or container orchestration</li>
<li>Strong interest in inference</li>
<li>You thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>You are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>
<li>You are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>You pick up slack, even when it goes outside your job description</li>
</ul>
<p>Preferred skills:</p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimisation, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimisation, batching, caching, and serving strategies</li>
<li>Experience with machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p>Salary Range: $300,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$485,000 USD</Salaryrange>
      <Skills>high-performance, large-scale distributed systems, cloud computing (AWS, GCP, Azure), kubernetes, infrastructure as code, container orchestration, inference, cross-functional collaboration, autonomy and self-driven, platform-agnostic tooling, capacity management, cost optimisation, resource planning, llm inference optimisation, machine learning infrastructure, ci/cd systems, multi-region deployments, geographic routing, global traffic management, python, rust, direct experience working with csp partner teams, building platform-agnostic tooling, hands-on experience with capacity management, strong familiarity with llm inference optimisation, experience with machine learning infrastructure, background designing and building ci/cd systems, solid understanding of multi-region deployments, proficiency in python or rust</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f80914c-588</externalid>
      <Title>Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About Role</p>
<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>
<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>
<ul>
<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>
<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>
<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>
<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>
<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>
<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>
<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>
</ul>
<p><strong>Key Qualifications</strong></p>
<ul>
<li>3+ years of experience working in software development covering distributed systems and databases.</li>
<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>
<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>
<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>
<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>
<li>Experience with Infrastructure as Code tools like SALT or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This technology, already used by Cloudflare’s enterprise customers, is provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>
<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses.</p>
<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a global network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7267602</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ab2590-2b4</externalid>
      <Title>Staff Security Engineer, Network Security</Title>
<Description><![CDATA[<p>We are seeking a Staff Network Security Engineer to architect the defense of our global backbone, edge, and massive-scale GPU clusters. You will move beyond configuring firewalls to engineering security into the network fabric itself, utilizing telemetry, automation, and deep protocol analysis.</p>
<p>As a Staff Network Security Engineer, you will:</p>
<ul>
<li>Unravel and tackle network security challenges at an exhilarating global scale.</li>
<li>Collaborate with exceptional network architects and engineers building the backbone infrastructure for the AI revolution.</li>
<li>Enjoy the freedom and support to experiment, innovate, and significantly shape our approach to securing the underlay and overlay of our cloud.</li>
</ul>
<p>In this role, you will:</p>
<ul>
<li>Conduct architecture reviews, protocol analysis, and design assessments to proactively identify and fix vulnerabilities in our backbone and data center fabrics.</li>
<li>Develop robust, repeatable frameworks for network security automation (CoPP, ACL generation, Route Filtering) that make it easy for teams to build securely from day one.</li>
<li>Collaborate closely with Network Engineering teams to integrate security checks and validation seamlessly into their CI/CD and config-push pipelines.</li>
<li>Craft clear, practical security guidance and documentation that empowers engineers to deploy secure routing policies and topologies.</li>
<li>Actively participate in architectural discussions regarding peering, transit, and traffic engineering, providing insightful security recommendations.</li>
<li>Occasionally, &#39;draw the owl&#39;: figure out innovative solutions for securing massive-throughput environments while navigating ambiguous situations.</li>
</ul>
<p>You will be working with a talented team of network engineers, security experts, and AI researchers to build and deploy a highly scalable and secure cloud infrastructure.</p>
<p>If you are passionate about network security, cloud computing, and AI, and enjoy working in a fast-paced, dynamic environment, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>core network protocols (BGP, OSPF/IS-IS, TCP/IP), deep knowledge of how they function at the packet level, network automation or security tooling in Go, Python, or similar modern languages, collaborating with network architects to implement secure designs in multi-vendor environments, Linux networking internals, control plane protection, and managing infrastructure as code, hyperscale network architectures (CLOS fabrics, MPLS/EVPN, VXLAN), hardware-level networking security (SmartNICs/DPUs, connectX), flow-based telemetry analysis, internet routing security standards (RPKI, MANRS), advanced DDoS mitigation strategies at the network layer, Infiniband and RoCE</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4620164006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c0569537-539</externalid>
      <Title>Staff Backend Engineer, Gitlab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Staff Engineer on the GitLab Delivery - Upgrades team, you&#39;ll guide the technical direction for GitLab&#39;s self-managed deployment strategy so customers can deploy, upgrade, and run GitLab reliably in their own infrastructure with minimal disruption.</p>
<p>You&#39;ll serve as a technical anchor for the team, working closely with your engineering manager, product manager, and partners across Site Reliability Engineering, Release, Security, and Development to shape cloud-native, operator-driven deployment patterns that reduce operational complexity and upgrade friction.</p>
<p>In your first year, you&#39;ll help define the architecture for zero-downtime upgrades, strengthen observability and reliability practices, and guide the next generation of deployment automation for self-managed GitLab environments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolving GitLab Operator and Helm charts to support zero-downtime upgrades for complex, stateful GitLab installations</li>
<li>Advancing the GitLab Environment Toolkit to simplify large-scale, production-ready self-managed deployments</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Guide the technical vision and architecture for GitLab&#39;s cloud-native, self-managed deployments and upgrade workflows.</li>
<li>Establish operational maturity standards, service integration patterns, and deployment models that help development teams manage the lifecycle of their components.</li>
<li>Design and maintain Kubernetes Operators, Helm charts, and upgrade orchestration tooling for self-managed GitLab deployments across varied environments.</li>
<li>Develop automation and integration frameworks for database migrations, rolling deployments, compatibility checks, and rollback paths.</li>
<li>Define database and application lifecycle strategies, including safe PostgreSQL migration approaches and validation mechanisms that reduce downtime risk.</li>
<li>Work with Product Management, GitLab.com Site Reliability Engineering, GitLab Dedicated, and development teams to align deployment patterns with customer needs.</li>
<li>Mentor engineers and enable customer-facing teams through design reviews, code reviews, documentation, and runbooks.</li>
<li>Drive observability, testing, performance, and resilience practices for self-managed deployments, and contribute to incident response and post-incident learning.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering experience designing and delivering production systems that customers install and operate in their own infrastructure.</li>
<li>Proficiency in Go for large, complex codebases; familiarity with Ruby on Rails and Rails application architecture is a useful addition.</li>
<li>Hands-on experience with Kubernetes in production, including building and maintaining Operators, designing Helm charts for stateful applications, and working with Custom Resource Definitions, admission controllers, and controller patterns.</li>
<li>Knowledge of cloud-native systems and tooling, such as service mesh, observability stacks, infrastructure as code, and automation tools like Terraform or Ansible.</li>
<li>Experience with stateful workloads and databases, including PostgreSQL schema design and migrations, persistent volumes, storage classes, and approaches for reducing downtime during upgrades.</li>
<li>Understanding of Linux systems and production operations, including package management, systemd, system-level debugging, observability, incident response, and on-call participation.</li>
<li>Ability to guide through influence, including writing clear technical proposals, documenting decisions, mentoring engineers, and working effectively across teams.</li>
<li>Interest in open source infrastructure or deployment tooling, or transferable experience from adjacent domains, with the ability to explain technical concepts clearly to different audiences.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Delivery - Upgrades team sits within GitLab Delivery and focuses on delivering GitLab to self-managed users through supported, validated deployment tooling. We own and evolve the GitLab Omnibus package, Helm charts, GitLab Operator, and the GitLab Environment Toolkit, and we work asynchronously across regions with partners in Site Reliability Engineering, Release, Security, and Development.</p>
<p>Our work centers on enabling zero-downtime upgrades, reducing operational complexity at scale, supporting GitLab’s cloud-native transition while continuing to serve existing deployments, and improving the upgrade experience for customers running GitLab in diverse environments.</p>
<p>For more on how we work, see [Link: Team Handbook Page].</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Ruby on Rails, Kubernetes, Cloud-native systems, Service mesh, Observability stacks, Infrastructure as code, Automation tools, Linux systems, Production operations, Package management, Systemd, System-level debugging, Incident response, On-call participation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. With over 50 million registered users and more than 50% of the Fortune 100 trusting GitLab, it is a large and established company.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463922002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd44a200-1ac</externalid>
      <Title>Director of Engineering (Service Foundations)</Title>
      <Description><![CDATA[<p>Job Title: Director of Engineering (Service Foundations)</p>
<p>We are seeking a seasoned Director of Engineering to lead our Service Foundations team. As a key member of our executive engineering team, you will be responsible for building and operating distributed systems, driving company-wide efficiency, reliability, and automation.</p>
<p>In this role, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical infrastructure initiatives that integrate AI-driven tooling directly into the infrastructure itself to make it more adaptive, scalable, and intelligent.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Solve real business needs at a large scale by applying your software engineering expertise</li>
<li>Ensure consistent delivery against milestones and strong alignment with the field, working &#39;two-in-a-box&#39; with product leadership</li>
<li>Evolve organisational structure to align with long-term initiatives, building strong &#39;5 ingredient&#39; teams with good comms architecture</li>
<li>Manage technical debt, including long-term technical architecture decisions, and balance it against the product roadmap</li>
<li>Lead and participate in technical, product, and design discussions</li>
<li>Build, manage, and operate highly scalable services in the cloud</li>
<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Create a culture of excellence on the team while leading with empathy</li>
</ul>
<p>Requirements:</p>
<ul>
<li>20+ years of industry experience building and operating large-scale distributed systems</li>
<li>Proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads</li>
<li>Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions</li>
<li>Ability to translate requirements from internal engineering teams into clear priorities and execution plans</li>
<li>Fluent across the infrastructure stack (storage, orchestration, observability, and developer platforms), with intuition for how these layers interact</li>
<li>Ability to evaluate and evolve abstractions: knowing when to unify, when to localise, and how to reduce cognitive load for product teams</li>
<li>BS in Computer Science (Master&#39;s or PhD preferred)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide (including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500) rely on the Databricks Data Intelligence Platform to unify and democratise data, analytics, and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure systems, Distributed systems, Infrastructure as Code, Containerisation, Orchestration, Observability, Developer platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201768002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can vary based on a number of factors, including qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c69bbb7-4bb</externalid>
      <Title>Intermediate Site Reliability Engineer, Environment Automation</Title>
      <Description><![CDATA[<p>Join the Dedicated team as a Site Reliability Engineer focused on Environment Automation, where your work will help power hundreds of isolated GitLab environments for our customers.</p>
<p>In this role, you&#39;ll help keep these environments reliable, scalable, secure, and consistent by treating everything as code and contributing to automation across the entire lifecycle, from initial provisioning to day-to-day operations.</p>
<p>Instead of operating a single platform, you&#39;ll collaborate with senior SREs to solve the unique challenges of managing many tenant environments in parallel, each with its own constraints and integration points.</p>
<p>You&#39;ll help define, deploy, and maintain GitLab environments across cloud providers using infrastructure as code, deployment packages, and Kubernetes.</p>
<p>Some examples of work you&#39;ll do:</p>
<ul>
<li>Contribute to the design and evolution of infrastructure automation using Terraform, Ansible, and Kubernetes to provision, upgrade, and operate many GitLab environments with minimal manual effort</li>
<li>Help debug and resolve production issues across Kubernetes clusters, GitLab components, and cloud services, then assist in building automation and safeguards that prevent similar issues from recurring</li>
<li>Assist in creating and maintaining deployment and orchestration tools, such as Helm Charts, omnibus-gitlab configurations, and multi-tenant workflows, that make it easy for teams to manage GitLab environments at scale</li>
</ul>
<p>You&#39;ll contribute to automating operational tasks across many GitLab environments, from initial provisioning and configuration updates to upgrades and routine maintenance, helping reduce manual work and improve reliability at scale under the guidance of senior team members.</p>
<p>You&#39;ll help build and refine the observability stack for multi-tenant GitLab environments so we monitor the right signals across Kubernetes, cloud services, and GitLab applications, supporting early issue detection and basic capacity tracking.</p>
<p>You&#39;ll assist in responding to platform alerts and incidents, collaborating with Environment Automation SREs and engineering teams to troubleshoot production issues across multiple tenants and document findings.</p>
<p>You&#39;ll support planning and implementation of infrastructure changes, capacity expansions, and new service rollouts for Dedicated and other managed GitLab environments, contributing to efforts that improve resource efficiency and environment isolation.</p>
<p>You&#39;ll develop and maintain scripts, automation tools, and infrastructure-as-code workflows that manage parts of the GitLab environment lifecycle, enabling more repeatable, self-service operations over time.</p>
<p>You&#39;ll apply and help implement best practices for running GitLab on Kubernetes and cloud platforms, focusing on day-to-day reliability, performance, and security while learning how to keep environments consistent.</p>
<p>You&#39;ll participate in the on-call rotation for production GitLab environments with appropriate support, helping triage and mitigate incidents across clusters and cloud providers and contributing to post-incident reviews.</p>
<p>You&#39;ll document operational tasks, runbooks, and lessons learned so they become clear, repeatable processes and can be candidates for future automation, improving shared knowledge and reducing manual toil across the team.</p>
<p>Experience working as an SRE or in a similar role operating production infrastructure, with an interest in automating the lifecycle of many environments or tenants in parallel, even if you have not yet done so at large scale.</p>
<p>Hands-on experience with Golang (required) and the ability to read, understand, and modify infrastructure tools written in Go.</p>
<p>Hands-on experience running Kubernetes-based workloads in production, including basic understanding of deployments, rollouts, and debugging common issues like crash loops, failed health checks, and scheduling problems.</p>
<p>Familiarity with infrastructure automation and configuration management tools such as Terraform and Ansible, including experience working with modules, variables, and managing state safely for multiple environments.</p>
<p>Solid understanding of Git-based workflows and infrastructure-as-code practices, with the ability to contribute to reusable modules, templates, and pipelines that make automation safer and more consistent.</p>
<p>Experience working in distributed systems or cloud-based production environments, ideally in SaaS or managed service settings, with comfort participating in incident response and on-call rotations under guidance from more senior team members.</p>
<p>A proactive mindset focused on automation and documentation: you look for opportunities to remove manual steps, improve runbooks, and turn repetitive tasks into reliable, self-service tools.</p>
<p>Comfort working asynchronously across distributed teams and a desire to contribute to GitLab&#39;s values of collaboration, transparency, and iteration.</p>
<p>About the team:</p>
<p>We are responsible for building, running, and evolving the entire lifecycle of the GitLab environments that power the GitLab Dedicated platform.</p>
<p>You&#39;ll be part of our team focused on owning the reliability, scalability, performance, and security of automated single-tenant GitLab instances and their supporting services.</p>
<p>GitLab Dedicated provides fully managed, isolated environments for customers around the world, which means your work directly impacts how organizations of all sizes run their mission-critical software delivery on GitLab.</p>
<p>We operate in a fully distributed, asynchronous environment across multiple regions, collaborating on everything from infrastructure automation and environment lifecycle design to incident response and capacity planning.</p>
<p>You&#39;ll be solving novel challenges at scale, from orchestrating infrastructure-as-code workflows across hundreds of tenants to designing the automation that keeps those environments consistent, secure, and up to date.</p>
<p>We continuously seek to reduce complexity and improve efficiency by leveraging cloud vendor-managed products.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Kubernetes, Terraform, Ansible, Infrastructure as Code, Automation, Scripting, Cloud computing, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8464417002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6556c9a6-357</externalid>
      <Title>Senior Professional Services, Technical Architect - AI</Title>
      <Description><![CDATA[<p>As a Senior Professional Services Technical Architect, AI at GitLab, you&#39;ll be an embedded expert who helps customers move from ideas to production. You&#39;ll work directly with customer teams as a consultative partner, running in-depth discovery to understand their environment and priorities, then designing and delivering solutions that connect business goals to architecture and implementation.</p>
<p>This is a deeply technical, customer-facing role where you&#39;ll build and deploy Custom Agents, Custom Flows, and CI/CD integrations. You&#39;ll own delivery end-to-end, from prototype through production support. You&#39;ll partner closely with Professional Services and Customer Success stakeholders, including Professional Services Engineers, Project Managers, Customer Success Managers, and Solution Architects.</p>
<p>Some examples of our projects include leading customer discovery and defining a prioritized GitLab Duo Agent Platform use case roadmap tied to clear success criteria, designing and delivering production-ready GitLab Duo Agent Platform implementations, building rapid prototypes to demonstrate the art of the possible with agentic AI, and integrating the GitLab Duo Agent Platform with customer systems and workflows using GitLab APIs, pipeline configuration, and infrastructure as code.</p>
<p>What you&#39;ll do:</p>
<p>Conduct deep customer discovery to understand business goals, technical constraints, and organizational dynamics, and translate them into clear problem statements and a prioritized use case plan for GitLab Duo Agent Platform.</p>
<p>Partner with customer stakeholders across engineering, security, compliance, and business teams to align on success criteria, milestones, and adoption strategy for AI workflows in production.</p>
<p>Design, build, and deploy production-ready GitLab Duo Agent Platform solutions, including Custom Agents, Custom Flows, and CI/CD integrations that map to validated customer use cases.</p>
<p>Embed with customer engineering teams to deliver hands-on implementations end-to-end, from prototype to production rollout, troubleshooting, and optimization.</p>
<p>Configure and integrate platform foundations such as runners, network access, runtime sandboxing, GitLab APIs (REST and GraphQL), and AI governance controls (for example, role-based access control and model policies) to meet enterprise requirements.</p>
<p>Measure and communicate impact using DORA (DevOps Research and Assessment) metrics, AI Impact Analytics, and Value Stream Analytics, and use those insights to guide iteration and expansion of successful use cases.</p>
<p>Codify repeatable deployment patterns, reusable assets, and lessons learned, contributing back to GitLab through documentation, accelerators, and product feedback informed by field experience.</p>
<p>Travel up to 50% for customer site engagements and company onsite events to support delivery, onboarding, and stakeholder alignment.</p>
<p>What you&#39;ll bring:</p>
<p>Demonstrated experience leading customer-facing technical engagements, from discovery through production rollout, with ownership of outcomes.</p>
<p>Proficiency in Python, with experience building and operating production-grade applications and integrations.</p>
<p>Experience delivering with GitLab CI/CD, including pipeline design, YAML configuration, and using GitLab APIs (REST and GraphQL).</p>
<p>Hands-on experience with infrastructure as code (for example, Terraform or Ansible) and deploying solutions into enterprise environments.</p>
<p>Working knowledge of large language model (LLM) capabilities and limitations, including prompt engineering and building agentic workflows (such as Custom Agents and Custom Flows).</p>
<p>Experience with Docker, container orchestration concepts, and runner configuration in secure environments.</p>
<p>Familiarity with DevSecOps practices, including security controls, access management, and compliance requirements that impact deployment design.</p>
<p>Strong written and verbal communication skills, with the ability to partner closely with customer stakeholders and translate business goals into technical plans in a remote, asynchronous environment.</p>
<p>About the team:</p>
<p>GitLab&#39;s Professional Services organization within Customer Success helps customers get value from the GitLab Duo Agent Platform. We&#39;re a remote, asynchronous team that works closely with customer-facing colleagues to support successful deployments. We focus on turning what we learn in the field into reusable assets, clearer documentation, and product feedback that helps improve GitLab Duo Agent Platform for future customers.</p>
<p>The base salary range for this role’s listed level is currently for residents of the United States only. This range is intended to reflect the role&#39;s base salary rate in locations throughout the US. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, alignment with market data, and geographic location. The base salary range does not include any bonuses, equity, or benefits. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary. United States Salary Range $164,880-$247,320 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,880-$247,320 USD</Salaryrange>
      <Skills>Python, GitLab CI/CD, Infrastructure as Code, Docker, Container Orchestration, DevSecOps, Large Language Model (LLM), Prompt Engineering, Agentic Workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>164880</Compensationmin>
      <Compensationmax>247320</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8334735002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58d220e6-02a</externalid>
      <Title>Senior Site Reliability Engineer, Tenant Services: Geo</Title>
      <Description><![CDATA[<p>We are looking for a skilled Senior Site Reliability Engineer to join our Tenant Services: Geo team. As a Senior Site Reliability Engineer, you will be responsible for ensuring the smooth operation of our user-facing services and production systems.</p>
<p>About Us</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. It enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</p>
<p>Responsibilities</p>
<ul>
<li>Execute Dedicated Geo migrations and cutovers end-to-end, including planning, pre-cutover validation, execution, and post-cutover verification and cleanup.</li>
<li>Join the team&#39;s shift and weekend coverage rotation for Dedicated cutovers across EMEA and US hours, and participate in the SaaS Site Reliability Engineering (SRE) on-call rotation to respond to incidents that impact GitLab.com availability.</li>
<li>Operate and improve the Geo operational surface for Dedicated, including:
<ul>
<li>Environment preparation and data hygiene checks prior to migrations.</li>
<li>Execution of replication, validation, and cutover procedures.</li>
<li>Handling Geo-related escalations from Support and internal partners.</li>
</ul>
</li>
<li>Design, build, and maintain automation, tooling, and runbooks that make migrations, cutovers, and Geo escalations as &#39;boring&#39; and repeatable as possible.</li>
<li>Run our infrastructure with tools such as Ansible, Chef, Terraform, GitLab CI/CD, and Kubernetes; contribute improvements back to GitLab&#39;s product and infrastructure where appropriate.</li>
<li>Build and maintain monitoring, alerting, and dashboards that:
<ul>
<li>Detect symptoms early, not just outages.</li>
<li>Track migration and cutover success rates, duration, rollback frequency, and related SLOs.</li>
</ul>
</li>
<li>Collaborate closely with:
<ul>
<li>The core Geo team on improving Geo features and operability.</li>
<li>Dedicated migrations and Support on migration planning, customer communications, and escalation handling.</li>
<li>Other Infrastructure teams on capacity planning, disaster recovery, and reliability improvements.</li>
</ul>
</li>
<li>Contribute to readiness reviews, incident reviews, and root cause analyses, turning learnings into changes in automation, process, or product.</li>
<li>Document every action, including runbooks, architecture decisions, and post-incident reviews, so your findings turn into repeatable practices and automation.</li>
<li>Proactively identify and reduce toil by automating repetitive operational work and simplifying migration workflows.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience operating highly-available distributed systems at scale, ideally in a SaaS environment with customer-facing SLAs.</li>
<li>Hands-on experience with at least one major cloud provider (e.g., Google Cloud Platform or Amazon Web Services), including networking, storage, and managed services.</li>
<li>Experience with Kubernetes and its ecosystem (e.g., Helm), including deploying and troubleshooting workloads.</li>
<li>Experience with infrastructure as code and configuration management tools such as Terraform, Ansible, or Chef.</li>
<li>Strong programming skills in at least one general-purpose language (preferably Go or Ruby) and proficiency with scripting (e.g., Shell, Python).</li>
<li>Experience with observability systems (e.g., Prometheus, Grafana, logging stacks) and using metrics and logs to troubleshoot performance and reliability issues.</li>
<li>Practical exposure to data replication, backup/restore, or migration scenarios (e.g., database replication, storage replication, or Geo-like technologies) where data integrity and downtime risk must be carefully managed.</li>
<li>Comfort participating in an on-call rotation, investigating incidents across the stack, and driving follow-through on corrective actions.</li>
<li>Ability to engage directly with enterprise customers during migrations and incidents, including on live calls and through clear written updates.</li>
<li>Ability to clearly define problems, propose options, and think beyond immediate fixes to improve systems and processes over time.</li>
<li>Ability to be a &#39;manager of one&#39;: self-directed, organized, and able to drive work to completion in a remote, asynchronous environment.</li>
<li>Strong written and verbal communication skills, with a bias toward clear, asynchronous documentation and collaboration.</li>
<li>Alignment with our company values and a commitment to working in accordance with those values.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience working with disaster recovery technologies.</li>
<li>Experience with managed/hosted environments similar to GitLab Dedicated, including regulated or compliance-sensitive customers (e.g., SOC2, ISO).</li>
<li>Prior work on large-scale data migrations or cutovers where customer data integrity, performance, and downtime risk had to be carefully balanced.</li>
<li>Hands-on experience designing and operating database replication, backup/restore, and cutover workflows (for example, PostgreSQL or cloud-managed equivalents such as AWS RDS), including planning and executing low-risk migrations for large datasets.</li>
<li>Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms.</li>
<li>Familiarity with GitLab (self-managed or SaaS), and/or contributions to open source projects.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Google Cloud Platform, Amazon Web Services, Kubernetes, Helm, Terraform, Ansible, Chef, Go, Ruby, Shell, Python, Prometheus, Grafana, Observability, Data replication, Backup/restore, Disaster recovery, Large-scale data migrations, Multi-tenant architectures, Sharding, GitLab</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform for DevSecOps. It has over 50 million registered users and over 50% of the Fortune 100 trust it to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8490453002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba0a936c-9b5</externalid>
      <Title>Partner Solution Architect (pre-sales)</Title>
      <Description><![CDATA[<p>We are looking for a Partner Solutions Architect to lead technical strategy and enablement for our ecosystem in the ANZ region. This is a hands-on builder role. You will be responsible for ensuring our partners are not only articulating Elastic&#39;s value but are technically capable of architecting, building, and validating complex solutions.</p>
<p>As a Partner Solutions Architect, you will:</p>
<ul>
<li>Own Technical Engagement Plans (TEPs) for focus partners, establishing long-term technical roadmaps at the CTO and Practice Lead level.</li>
<li>Guide partners through high-stakes Technical Validation cycles, ensuring Elastic solutions are built to best-practice standards.</li>
<li>Lead &#39;one-to-many&#39; technical &#39;Build-a-thons&#39; and hands-on laboratory sessions that empower partner engineers to lead their own implementations.</li>
<li>Build deep relationships with partner pre-sales teams to guide them through the &#39;how-to&#39; of complex Search AI, Observability, and Security architectures at the configuration level.</li>
<li>Collaborate on &#39;design wins&#39; by developing repeatable technical blueprints.</li>
</ul>
<p>To be successful in this role, you will require:</p>
<ul>
<li>Direct, hands-on experience with the Elastic Stack (ELK) or similar distributed search/analytics technologies (e.g., OpenSearch, Solr, Splunk, Datadog).</li>
<li>8+ years of experience in technical roles.</li>
<li>Proven ability to design and build technical prototypes, ingest complex datasets, and optimize search/indexing performance.</li>
<li>Hands-on experience with Kubernetes, Docker, and Infrastructure as Code (Terraform) on AWS, Azure, or GCP.</li>
<li>3+ years in a partner-facing role, with a focus on building technical practices and enabling third-party engineering teams.</li>
<li>The ability to translate deep technical capabilities into scalable partner-led solutions.</li>
</ul>
<p>If you are a motivated and experienced professional with a passion for technology and partnership development, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Elastic Stack (ELK), OpenSearch, Solr, Splunk, Datadog, Kubernetes, Docker, Infrastructure as Code (Terraform), AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Their platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7757097</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9d5360b-229</externalid>
      <Title>Staff Platform Engineer - Infra + DevOps</Title>
      <Description><![CDATA[<p>We&#39;re looking for a seasoned Platform Engineer to join our team. As a leader in aging care innovation, Honor provides technology, tools, and services that empower older adults to live life on their own terms. Our platform engineering team builds and manages the infrastructure &amp; core services that power Honor&#39;s Care Platform. We&#39;re seeking someone with at least 6 years of professional experience on a platform engineering team within a product-centric company. You will be responsible for designing, implementing, and maintaining scalable distributed systems and infrastructure. Your expertise should include cloud platforms, advanced software design patterns and architecture, operations and automation, and containerization technologies such as Kubernetes. You will join a small team of highly skilled, enthusiastic, and passionate engineers, with the opportunity to make an outsized impact on the future evolution of Honor&#39;s Care Platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement foundational patterns and libraries for Python applications, across a range of technologies from API services to event processing</li>
<li>Utilize Infrastructure as Code (IaC) tools to ensure reproducible and scalable environment setups</li>
<li>Design and implement infrastructure for applications hosted on AWS, supporting event-driven systems, containerized services on Kubernetes, and serverless functions</li>
<li>Develop and maintain robust CI/CD pipelines using tools such as Jenkins and ArgoCD</li>
<li>Automate the lifecycle management of code from development through production, including code promotion and configuration management</li>
<li>Instrument observability through tools such as CloudWatch and Datadog to monitor and optimize application performance across multiple environments</li>
<li>Scale infrastructure to meet increasing demand while managing cost effectively</li>
<li>Define, instrument, and measure standards for quality, security, scalability, and availability with a focus on delivering business value</li>
<li>Deliver a turn-key developer experience for local development</li>
<li>Develop talent through mentorship</li>
</ul>
<p>Requirements:</p>
<ul>
<li>At least 6 years of professional experience in a platform engineering team within a product-centric company</li>
<li>Experience working with an RPC architecture</li>
<li>Experience working at a technology startup and familiarity with the challenges of evolving platform maturity</li>
<li>First-hand experience navigating multiple distributed architecture patterns</li>
<li>Strong written and verbal communication, tailored to a variety of audiences</li>
<li>A strategic mindset with a product-first approach and customer obsession</li>
</ul>
<p>Our range reflects the hiring range for this position. We use the national average to determine pay, as we are a remote-first company. Individual pay is based on a number of factors, including qualifications, skills, experience, education, and training. Base pay is just one part of our total rewards program. Honor offers generous equity packages that increase with position level and responsibilities, and a 401(k) with up to a 4% employer match. We provide medical, dental, and vision coverage, including zero-cost plans for employees. Short-Term Disability, Long-Term Disability, and Life Insurance are fully employer-paid, with a voluntary additional Life Insurance option. We also offer a generous time-off program, mental health benefits, a wellness program, and a discount program.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700-$223,000 USD</Salaryrange>
      <Skills>cloud platforms, advanced software design patterns &amp; architecture, operations and automation, containerization technologies like Kubernetes, Infrastructure as Code (IaC), AWS, event-driven systems, serverless functions, CI/CD pipelines, Jenkins, ArgoCD, observability, CloudWatch, DataDog, quality, security, scalability, availability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults. Its portfolio includes Home Instead, Inc., the world&apos;s leading provider of in-home care.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>200700</Compensationmin>
      <Compensationmax>223000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8297124002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>594b20c4-c28</externalid>
      <Title>Infrastructure Engineer, Security</Title>
      <Description><![CDATA[<p>We&#39;re looking for an infrastructure engineer to own and evolve the security infrastructure that underpins our foundation models. In this role, you&#39;ll work across compute, storage, networking, and data platforms, making sure our systems are secure, reliable, and built to scale.</p>
<p>You&#39;ll shape controls, architecture, and tooling so that security is part of how the platform works by default. You&#39;ll partner closely with research and product teams, enabling them to move quickly while keeping our models, data, and environments protected.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting security patterns for platforms and services, including network segmentation, service-to-service authentication, RBAC, and policy enforcement in Kubernetes and cloud environments.</li>
<li>Managing identity, access, and secrets for humans and services: workload and cross-cloud identity, least-privilege IAM, and secrets management.</li>
<li>Building secure platforms for data ingestion, processing, and curation: classification, encryption, access controls, and safe sharing patterns across teams.</li>
<li>Writing threat models and reviewing designs with researchers and engineers to help them ship features and experiments in a safe, scalable way.</li>
<li>Automating security checks and building guardrails: policy-as-code, secure infrastructure baselines, validation in CI/CD, and tools that make the secure path the easiest one.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Bachelor&#39;s degree or equivalent experience in engineering, or similar.</li>
<li>Strong background with containers and orchestration (e.g., Kubernetes) and how to secure them (namespaces, network policies, pod security, admission controls, etc.).</li>
<li>Practical experience with Infrastructure as Code (Terraform or similar), including secure patterns for provisioning networks, IAM, and shared services.</li>
<li>Solid understanding of cloud networking and security: VPCs, load balancers, service discovery, mTLS, firewalls, and zero-trust-style architectures.</li>
<li>Proficiency with a systems language such as Rust and scripting in Python for building platform components and internal tools.</li>
<li>Evidence of owning complex, production-critical systems, including debugging issues that span infra, security, and application layers.</li>
</ul>
<p>Preferred qualifications include experience with ML infrastructure, GPU clusters, or large-scale training environments, as well as background in AI labs, HPC environments, or ML-heavy organizations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$200,000 - $475,000 USD</Salaryrange>
      <Skills>Kubernetes, Infrastructure as Code, Cloud Networking and Security, Systems Language (Rust), Scripting (Python), ML Infrastructure, GPU Clusters, Large-Scale Training Environments, AI Labs, HPC Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachineslab.com.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.</Employerdescription>
      <Employerwebsite>https://thinkingmachineslab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5015964008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b5ce114e-dac</externalid>
      <Title>Cloud Engineer – Factory Systems and Operational Technology</Title>
      <Description><![CDATA[<p>Anduril Industries is a defence technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology and business model of the 21st century&#39;s most innovative companies to the defence industry, Anduril is changing how military systems are designed, built and sold.</p>
<p>The company&#39;s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control centre.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion and networking technology to the military in months, not years.</p>
<p>We are seeking a mission-driven Cloud Infrastructure Engineer to take a leading role in designing and implementing world-class defensive controls. This is a high-impact role with the autonomy to shape security architecture and protect the technology that is changing the future of defence.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and Own Security Architecture: Architect, build and deploy robust, scalable security controls for our corporate, development and production cloud environments (AWS, Azure, GCP).</li>
<li>Automate Everything: Develop and automate infrastructure-as-code (IaC) to manage and scale our cloud deployments securely and efficiently.</li>
<li>Proactively Defend: Continuously monitor, identify and remediate security weaknesses and configuration drift across our entire cloud footprint.</li>
<li>Be a Force Multiplier: Partner with infrastructure, application and product teams to embed security best practices into their workflows and secure environments holding mission-critical data.</li>
<li>Enable Scale and Reliability: Engineer systems and processes that ensure our platforms are highly available, resilient and prepared for rapid growth.</li>
<li>Serve as a Cloud Security Expert: Act as the go-to subject matter expert for teams across Anduril, providing guidance, mentorship and paved-road solutions for building securely in the cloud.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proven experience building and securing complex cloud environments, typically gained through 3+ years in a Cloud Security, DevOps or SRE role.</li>
<li>Deep proficiency in at least one major cloud provider (AWS, Azure or GCP).</li>
<li>Strong hands-on experience with Infrastructure as Code (e.g., Terraform, CloudFormation, Bicep).</li>
<li>Solid programming/scripting ability in one or more languages (e.g., Python, Go, Rust).</li>
<li>Firm understanding of public cloud networking principles (e.g., VPCs, subnets, routing, security groups).</li>
<li>Must be a U.S. Person and eligible to obtain and maintain a U.S. Top Secret security clearance.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience hardening and monitoring Kubernetes clusters (EKS, GKE, AKS).</li>
<li>Experience with cloud security posture management (CSPM) or threat detection tooling.</li>
<li>Familiarity with CI/CD pipelines and securing the software supply chain.</li>
<li>Knowledge of compliance frameworks such as FedRAMP, MRL, SOC 2 or CMMC.</li>
<li>On-premises network engineering experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$193,000 USD</Salaryrange>
      <Skills>Cloud Security, DevOps, SRE, Infrastructure as Code, Terraform, CloudFormation, Bicep, Python, Go, Rust, Public Cloud Networking, VPCs, Subnets, Routing, Security Groups, Kubernetes, Cloud Security Posture Management, Threat Detection Tooling, CI/CD Pipelines, Software Supply Chain Security, Compliance Frameworks, FedRAMP, MRL, SOC 2, CMMC, On-Premises Network Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5087348007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4ec68a8-fb9</externalid>
      <Title>Manager, Enterprise Security Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a security-focused leader to build and scale world-class defensive controls protecting the infrastructure that supports our defence technology products.</p>
<p>As a Manager, Enterprise Security Engineering, you will lead a high-performing team of security engineers, set technical direction, and establish clear standards for engineering excellence and ownership. You will define and execute the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A, and design and implement security controls across cloud, production, and corporate infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, mentoring, and growing a high-performing team of security engineers</li>
<li>Setting technical direction and establishing clear standards for engineering excellence and ownership</li>
<li>Partnering in hiring, performance management, and career development</li>
<li>Defining and executing the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A</li>
<li>Designing and implementing security controls across cloud, production, and corporate infrastructure</li>
<li>Developing tools and systems to improve security posture and operational efficiency</li>
<li>Conducting security architecture and design reviews for systems and applications</li>
<li>Partnering across infrastructure, IT, product, and security teams to reduce risk while enabling velocity</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Ability to work autonomously, take ownership of projects, and collaborate across teams</li>
<li>Demonstrated ability to translate ambiguous requirements into clear technical roadmaps and delivered outcomes</li>
<li>Have participated in or supported incident response events</li>
<li>Strong programming ability in one or more general-purpose languages (Python, Go, Rust, etc)</li>
<li>Experience with one or more infrastructure as code languages (e.g., Terraform, AWS CDK) in a production capacity</li>
<li>Experience conducting security architecture or design reviews around custom business applications</li>
<li>Strong understanding of modern attack vectors and defensive mitigation strategies</li>
<li>Experience working with cloud platforms and deploying applications through CI/CD pipelines</li>
<li>Experience implementing security controls across endpoints, corporate cloud environments, and internal infrastructure</li>
<li>Eligible to obtain and maintain a U.S. TS clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience building bespoke solutions in high-growth and high-complexity environments</li>
<li>Experience with AWS, Azure, or GCP security ecosystem and tooling</li>
<li>Strong experience with Linux operating systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>security engineering, infrastructure as code, cloud security, endpoint security, M&amp;A security, incident response, security architecture, CI/CD pipelines, Linux operating systems, AWS security ecosystem, Azure security ecosystem, GCP security ecosystem, containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril is a defence technology company that develops and manufactures advanced sensors and systems for military and commercial applications.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5070618007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e772a5e2-9a4</externalid>
      <Title>Lead Software Engineer, API/SDK</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our rapidly growing team in Seattle, WA. In this role, you will work on our developer portal and generated SDKs to enable our partners to write complex technical integrations for the Lattice platform.</p>
<p>This position requires deep technical expertise in API design, cloud architecture, and hands-on development experience. If you thrive on solving complex technical challenges, enjoy creating great developer ecosystems, and are passionate about creating mission-critical solutions at scale, then this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work on our developer portal to enhance partner engagement and streamline the integration process</li>
<li>Develop infrastructure to simplify the exposure of APIs and SDKs for external developers</li>
<li>Build and maintain sample applications, SDKs, and technical frameworks that enable partners to implement sophisticated solutions</li>
<li>Provide technical leadership during partner onboarding, guiding their engineering teams through complex integration scenarios</li>
<li>Create proof-of-concept applications and reference architectures that demonstrate advanced Lattice capabilities and integration patterns</li>
<li>Collaborate with engineering teams to influence the platform roadmap based on real-world implementation challenges</li>
<li>Conduct technical reviews of partner architectures and provide recommendations for optimization and scalability</li>
<li>Troubleshoot complex integration issues and provide hands-on technical support for mission-critical deployments</li>
<li>Evangelize best practices for building resilient, secure, and performant applications on the Lattice platform</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience as a Senior Software Engineer with customer-facing responsibilities</li>
<li>Strong programming experience in multiple languages (Python, Java, Go, C++, or similar) with demonstrated ability to build production-grade applications</li>
<li>Deep expertise in distributed systems architecture, including microservices, event-driven architectures, and API gateway patterns</li>
<li>Experience with CI/CD pipelines, infrastructure as code, and DevOps practices</li>
<li>Hands-on experience with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)</li>
<li>Proven track record of designing and implementing complex system integrations in enterprise environments</li>
<li>Experience with API technologies including REST, gRPC, GraphQL, and real-time communication protocols (WebSockets, message queues)</li>
<li>Strong understanding of security patterns, authentication/authorization frameworks, and data protection in distributed systems</li>
<li>Excellent technical communication skills with the ability to present complex architectural concepts to both technical and non-technical stakeholders</li>
<li>Must be a U.S. Person due to required access to U.S. export-controlled information or facilities</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience architecting solutions for defence, aerospace, or other mission-critical industries</li>
<li>Background in edge computing, IoT architectures, or real-time data processing systems</li>
<li>Knowledge of air-gapped environments, offline-first architectures, and high-availability system design</li>
<li>Open source contributions to architectural frameworks or developer tools</li>
<li>Experience mentoring engineering teams and leading technical design reviews</li>
<li>Advanced degree in Computer Science, Engineering, or related technical field</li>
</ul>
<p>Salary Range: $191,000-$253,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>API design, cloud architecture, hands-on development experience, distributed systems architecture, CI/CD pipelines, infrastructure as code, DevOps practices, cloud platforms, containerization technologies, complex system integrations, API technologies, security patterns, authentication/authorization frameworks, data protection, edge computing, IoT architectures, real-time data processing systems, air-gapped environments, offline-first architectures, high-availability system design, open source contributions, mentoring engineering teams, leading technical design reviews</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4754841007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3c6419c4-a9b</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
<Description><![CDATA[<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable, without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>
<li>Deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>
<li>Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>
<li>Hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>
<li>Experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning.</li>
<li>A deep curiosity for how things work under the hood and a proven ability to work independently to solve opaque performance issues</li>
<li>Experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments.</li>
<li>Strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>
<li>Excellent communication and collaboration skills; you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>
<li>Ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems</li>
<li>Published work in performance optimization and scaling distributed systems</li>
</ul>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61fb5f38-a35</externalid>
      <Title>Manager, Enterprise Security Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a security-focused leader to build and scale world-class defensive controls protecting the infrastructure that supports our defense technology products.</p>
<p>As a Manager, Enterprise Security Engineering, you will lead a high-performing team of security engineers, set technical direction, and establish clear standards for engineering excellence and ownership.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, mentoring, and growing a high-performing team of security engineers</li>
<li>Setting technical direction and establishing clear standards for engineering excellence and ownership</li>
<li>Partnering in hiring, performance management, and career development</li>
<li>Defining and executing the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A</li>
<li>Designing and implementing security controls across cloud, production, and corporate infrastructure</li>
<li>Developing tools and systems to improve security posture and operational efficiency</li>
<li>Conducting security architecture and design reviews for systems and applications</li>
<li>Partnering across infrastructure, IT, product, and security teams to reduce risk while enabling velocity</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Ability to work autonomously, take ownership of projects, and collaborate across teams</li>
<li>Demonstrated ability to translate ambiguous requirements into clear technical roadmaps and delivered outcomes</li>
<li>Have participated in or supported incident response events</li>
<li>Strong programming ability in one or more general-purpose languages (Python, Go, Rust, etc)</li>
<li>Experience with one or more infrastructure as code languages (e.g., Terraform, AWS CDK) in a production capacity</li>
<li>Experience conducting security architecture or design reviews around custom business applications</li>
<li>Strong understanding of modern attack vectors and defensive mitigation strategies</li>
<li>Experience working with cloud platforms and deploying applications through CI/CD pipelines</li>
<li>Experience implementing security controls across endpoints, corporate cloud environments, and internal infrastructure</li>
<li>Eligible to obtain and maintain a U.S. TS clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience building bespoke solutions in high-growth and high-complexity environments</li>
<li>Experience with AWS, Azure, or GCP security ecosystem and tooling</li>
<li>Strong experience with Linux operating systems</li>
</ul>
<p>US Salary Range: $166,000-$220,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>security engineering, infrastructure as code, cloud security, endpoint security, incident response, AWS, Azure, GCP, Linux, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril is a defense technology company that develops and manufactures advanced sensors and software for military and commercial applications.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5075703007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c73d22c6-873</externalid>
      <Title>Senior Software Engineer (Golang, K8s &amp; CI/Build Services)</Title>
<Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI.</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work.</p>
<p>We&#39;re all in on this mission.</p>
<p>If you are too, let&#39;s talk.</p>
<p><strong>What You&#39;ll Own:</strong></p>
<ul>
<li>Unified Build Architectures: Design and implement modular, reusable build stages that define how all code at Okta is tested, secured, and packaged.</li>
<li>Systems Innovation: Solve deep scaling bottlenecks (e.g., Monorepo segmentation, dependency resolution) to accelerate thousands of developers.</li>
<li>Infrastructure as Code: Own the delivery of highly available build agents and artifact registries using Golang, Terraform, and AWS.</li>
<li>Engineering Excellence: Champion &#39;Build-it-once&#39; philosophies, creating self-healing systems that reduce operational toil and eliminate reactive support.</li>
</ul>
<p><strong>What We Are Looking For:</strong></p>
<ul>
<li>Experience: 6+ years in Platform or Infrastructure Engineering, specifically building large-scale CI/Build platforms.</li>
<li>Expertise: Advanced proficiency in Golang for tooling and Terraform for infrastructure orchestration.</li>
<li>Containerization: Mastery of Kubernetes (K8s) and container primitives for build execution.</li>
<li>Scale Mindset: A proven track record of investigating distributed system failures and delivering performant solutions at scale.</li>
<li>Ownership: You don&#39;t just write code; you own the reliability, cost-efficiency, and security guardrails of the entire ecosystem.</li>
</ul>
<p><strong>The Okta Experience</strong></p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate.</p>
<p>Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>
<p>We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>
<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>
<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process.</p>
<p>In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>
<p>Okta is committed to complying with applicable data privacy and security laws and regulations.</p>
<p>For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Terraform, Kubernetes, Container primitives, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7810108</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d3e5c3d-569</externalid>
      <Title>Senior Engineer, Datacenter Server Lifecycle</Title>
<Description><![CDATA[<p>As a Senior Engineer on the Datacenter Machine Lifecycle team, you will own the end-to-end operational journey of every machine in our facility, from initial provisioning and deployment, across its working life, through maintenance and refresh, and all the way to decommissioning.</p>
<p>This is greenfield work: you will help define the processes, tooling, and operational standards that govern how we run and retire hardware at scale.</p>
<p>A distinguishing aspect of this role is its deep intersection with security. The machines in our datacenter handle some of the most sensitive workloads in AI: training frontier models and serving millions of users interacting with Claude.</p>
<p>Ensuring that every machine in the fleet is trusted, attested, and operating with a verified chain of integrity from the hardware up is a core part of the job, not an afterthought.</p>
<p>You will partner closely with our Infrastructure Security team to define and enforce trusted compute standards across the lifecycle, from secure provisioning through end-of-life handling.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the build-out of automation to support datacenters containing tens of thousands of servers.</li>
<li>Own and define the end-to-end machine lifecycle strategy, from provisioning and deployment through operation, maintenance, refresh, and decommissioning, and maintain automation and operational procedures for common lifecycle events (e.g. hardware failures, firmware upgrades, fleet rotations).</li>
<li>Partner closely with Infrastructure Security to design and enforce trusted compute standards across the machine lifecycle.</li>
<li>Work closely with our Networking team to ensure end-to-end connectivity across all sites.</li>
<li>Build and maintain tooling to track machine health, configuration, and operational status across the full datacenter fleet.</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 5+ years of experience in datacenter operations, hardware infrastructure management, or a closely related discipline.</li>
<li>Have deep, hands-on experience with server hardware, including rack deployment, cabling, troubleshooting, and understanding failure modes at scale.</li>
<li>Understand hardware lifecycle management end-to-end: asset tracking, provisioning workflows, maintenance scheduling, and decommissioning practices.</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, or Java).</li>
<li>Are comfortable navigating ambiguity and working independently to drive progress on complex, cross-functional problems.</li>
<li>Communicate clearly and can build consensus with a wide range of stakeholders.</li>
<li>Have working knowledge of modern cloud infrastructure, including Kubernetes, Infrastructure as Code, AWS, and GCP.</li>
<li>Are comfortable with occasional travel to datacenter sites across North America.</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Hands-on experience with GPU or AI accelerator hardware (e.g. NVIDIA A100/H100, AMD MI300, Google TPUs, or AWS Trainium) and an understanding of their operational demands.</li>
<li>Familiarity with modern provisioning tooling such as coreboot, LinuxBoot, or u-root.</li>
<li>Experience building or contributing to datacenter automation or fleet management platforms.</li>
<li>Experience building and deploying server operating system distributions across thousands of hosts.</li>
<li>A background in large-scale capacity planning and hardware refresh strategy, ideally at a hyperscaler or large cloud provider.</li>
<li>Experience with trusted compute and hardware security concepts such as secure boot, TPM, hardware attestation, and firmware verification, or a strong desire to develop deep expertise in this area.</li>
</ul>
<p>The annual compensation range for this role is £255,000-£325,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£255,000-£325,000 GBP</Salaryrange>
      <Skills>datacenter operations, hardware infrastructure management, server hardware, programming language, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, GPU or AI accelerator hardware, modern provisioning tooling, datacenter automation, fleet management platforms, trusted compute and hardware security concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5131038008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>782a1c68-325</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior DevOps Engineer to join our Infrastructure Engineering group. As a Senior DevOps Engineer, you will be responsible for innovation in infrastructure and automation for ZoomInfo Engineering. You will have a strong background in modern infrastructure, with a thorough understanding of industry best practices. You will have a high level of comfort participating in challenging technical discussions and advocating for best practices in a high-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Thorough, clear, concise documentation of new and existing standards, procedures, and automated workflows</li>
<li>Championing of best practices and standards around infrastructure configuration and management</li>
<li>Experience in creating internal products and managing their software development lifecycle</li>
<li>Deployment, configuration, and management of infrastructure via infrastructure as code</li>
<li>Working hands on with cloud infrastructure (AWS, Azure, and GCP)</li>
<li>Working hands on with container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE, etc.)</li>
<li>Configuration and management of Linux based tools and third-party cloud services</li>
<li>Continuous improvement of our infrastructure, ensuring that it is highly available and observable</li>
</ul>
<p>Minimum Requirements:</p>
<ul>
<li>Solid foundation of experience managing Linux systems in virtual environments (6+ years)</li>
<li>Deploying and maintaining highly available infrastructure in one or more Cloud providers (5+ years, AWS or GCP preferred)</li>
<li>Infrastructure as code using Terraform (4+ years)</li>
<li>Creating, deploying, maintaining, and troubleshooting Docker images (4+ years)</li>
<li>Scoping, deploying, maintaining and troubleshooting Kubernetes clusters (4+ years)</li>
<li>Developing and maintaining an active codebase, preferably in Go or Python (3+ years)</li>
<li>Experience with PaaS technologies (5+ years, EKS and GKE preferred)</li>
<li>Maintaining monitoring and observability tools (Datadog, Prometheus preferred)</li>
<li>Thorough understanding of network infrastructure and concepts (VPNs, routers and routing protocols, TCP/IP, IPv4 and v6, UDP, OSI layers, etc.)</li>
<li>Experience with load balancing and proxy technologies (Istio, Nginx, HAProxy, Apache, Cloud load balancers, etc.)</li>
<li>Debugging and troubleshooting complex problems in cloud-native infrastructure.</li>
<li>Slack native mentality.</li>
<li>Bachelor’s Degree in Computer Science or a related technical discipline, or the equivalent combination of education, technical certifications, training, or work experience.</li>
</ul>
<p>Abilities Required:</p>
<ul>
<li>Demonstrated ability to learn new technologies quickly and independently</li>
<li>Strong technical, organizational and interpersonal skills</li>
<li>Strong written and verbal communication skills</li>
<li>Must be able to read, understand, and communicate complex problems and solutions in English over a textual medium (such as Slack)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux, Cloud infrastructure (AWS, Azure, GCP), Container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE), Infrastructure as code (Terraform), Go, Python, PaaS technologies (EKS, GKE), Monitoring and observability tools (Datadog, Prometheus)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a technology company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8287254002</Applyto>
      <Location>Ra&apos;anana, Israel</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>146ddf7d-edd</externalid>
      <Title>Network Security Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are seeking a seasoned Senior Network Security Engineer to join our dynamic security team. The ideal candidate will possess deep expertise in network security technologies, focusing on switching and routing systems within cloud-native and AI-focused infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Serve as a subject matter expert in network security, particularly firewalls, VPNs, IDS/IPS, routing protocols (e.g., BGP, OSPF), and switching technologies.</li>
<li>Manage and update firewall configurations across our enterprise network to align with operational and security needs.</li>
<li>Deploy new firewalls, switches, routers, and network security devices in response to evolving threats and demands.</li>
<li>Develop and propose innovative network security solutions to address operational challenges in routing and switching environments.</li>
<li>Enhance security processes through thorough documentation and change management.</li>
<li>Act as the primary resolver for complex network security issues, including escalation support.</li>
<li>Ensure network security systems, switches, and routers are up-to-date with patches, firmware, and maintenance.</li>
<li>Monitor and respond to security events in cloud environments (e.g., AWS, GCP, Azure, Datacenter), with emphasis on network traffic analysis.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, Information Systems, or a related field.</li>
<li>4+ years of experience in network security engineering, with hands-on focus on switching and routing.</li>
<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>
<li>Strong understanding of network security principles, protocols (e.g., TCP/IP, VLANs, ACLs), and best practices for secure routing and switching.</li>
<li>Proficiency in at least one major cloud platform (AWS, GCP, or Azure) and its network security services (e.g., VPCs, Security Groups).</li>
<li>Experience with network analysis tools such as Wireshark and tcpdump, and with equipment from vendors including Cisco, Juniper, and Palo Alto Networks.</li>
<li>Familiarity with scripting languages (e.g., Python, Bash) for automation of network security tasks.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Relevant network-specific certifications (e.g., CCNP Security, CCIE Security, JNCIP-SEC, PCNSE).</li>
<li>Experience in multi-cloud environments and Infrastructure as Code tools like Terraform for network provisioning.</li>
<li>Knowledge of DevSecOps practices tailored to network security integration.</li>
<li>Experience building custom tools or integrations for enhancing network security operations.</li>
<li>Interest in leveraging AI for network threat detection and automation.</li>
<li>Contributions to open-source projects in network security or related tools.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>firewalls, VPNs, IDS/IPS, routing protocols, switching technologies, cloud platforms, network security services, network analysis tools, scripting languages, CCNP Security, CCIE Security, JNCIP-SEC, PCNSE, multi-cloud environments, Infrastructure as Code, DevSecOps, custom tools, AI for network threat detection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that accurately understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4800712007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6984004d-b3f</externalid>
      <Title>Intermediate Backend Engineer, Gitlab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab with assurance by building and supporting the deployment tooling, infrastructure, and automation behind how GitLab is installed, upgraded, and operated.</p>
<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to improve reliability, security, and scalability in production-grade environments. This is a hands-on role where you&#39;ll partner with Distribution Engineers, Site Reliability Engineers, Release Managers, Security, and Development teams to make self-managed GitLab easier to use across a wide range of platforms.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolve Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support new GitLab features and architectures</li>
<li>Improve installation, upgrade, and validation automation for large-scale self-managed GitLab deployments</li>
<li>Maintain and improve the Omnibus GitLab package so GitLab components work reliably in self-managed deployments</li>
<li>Develop and support GitLab Helm Charts for scalable, production-ready Kubernetes deployments</li>
<li>Enhance the GitLab Environment Toolkit (GET) and validated reference architectures used by enterprise and internal users</li>
<li>Support and extend the GitLab Operator for Kubernetes-native lifecycle management of GitLab installations</li>
<li>Improve the installation, upgrade, and day-to-day operating experience across supported self-managed platforms</li>
<li>Collaborate with Security to address vulnerabilities and strengthen secure defaults and configurations across the deployment stack</li>
<li>Build and maintain automation and continuous integration and continuous deployment pipelines that validate deployment tooling across Omnibus, Charts, GET, and the Operator</li>
<li>Partner with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features and keep user-facing documentation accurate and useful</li>
</ul>
<ul>
<li>Experience building and maintaining backend services in production environments, especially in deployment, infrastructure, or platform tooling</li>
<li>Practical knowledge of Kubernetes operations, including authoring and maintaining Helm charts</li>
<li>Proficiency with Ruby and Go, along with scripting skills to automate workflows and tooling</li>
<li>Familiarity with Terraform and infrastructure as code practices across cloud and on-premises environments</li>
<li>Hands-on experience with relational databases, especially PostgreSQL, including performance and reliability considerations</li>
<li>Understanding of secure, scalable, and supportable deployment practices, along with observability tools such as Prometheus and Grafana</li>
<li>Experience collaborating in large codebases and distributed teams, including writing clear user-facing documentation and implementation guides</li>
<li>Openness to learning new technologies and applying transferable skills across different parts of the GitLab deployment stack</li>
</ul>
<p>The Upgrades team is part of GitLab Delivery and delivers GitLab to self-managed users through supported, validated deployment tooling. The team maintains Omnibus GitLab, Helm Charts, the GitLab Operator, and the GitLab Environment Toolkit (GET) to help self-managed users deploy GitLab securely and reliably across diverse environments. You&#39;ll join a distributed group of backend engineers that works asynchronously across time zones and collaborates closely with Site Reliability Engineering, Release, Security, and Development teams. The team is focused on improving installation and upgrade workflows, strengthening automation and security, and helping self-managed customers run GitLab successfully at any scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Go, Kubernetes, Helm charts, Terraform, infrastructure as code, PostgreSQL, relational databases, observability tools, Prometheus, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by over 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463951002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5834e3ad-7b2</externalid>
      <Title>Senior Site Reliability Engineer - Security and Data Systems (Federal)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Senior Site Reliability Engineer (SRE) - Security and Data Systems</strong></p>
<p>Our company is seeking a highly skilled Senior Site Reliability Engineer to join our team. We are a SaaS company specializing in securing large-scale systems. This role is a blend of software engineering and systems administration: you&#39;ll be responsible for building and maintaining highly reliable, scalable, and secure infrastructure. You will be a key contributor, applying your expertise to automate manual processes and proactively solve complex problems before they become incidents. The role also includes incident handling and participation in on-call shifts.</p>
<p>*This position requires the ability to access U.S. National Security information. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Platform &amp; Reliability: Design, build, and maintain the core infrastructure that underpins our security SaaS offerings, ensuring high availability, performance, and scalability. This includes building and operating the tooling for our Snowflake data systems.</li>
<li>Automation: Develop robust automation using code to eliminate toil and ensure consistency across our environments. You&#39;ll be a key driver in automating everything from infrastructure provisioning to application deployment and incident response.</li>
<li>Security &amp; Compliance: Work closely with our security teams to embed a security-first mindset into all our processes and infrastructure. You will be responsible for ensuring our systems and data platforms are compliant with industry standards.</li>
<li>Incident Response: Participate in on-call rotations and be a primary responder for critical incidents, leading root cause analysis and implementing preventative measures to ensure issues don&#39;t recur.</li>
<li>Collaboration: Partner with development, data science, and security teams to provide expert guidance on architectural decisions, best practices, and the implementation of new services.</li>
</ul>
<p><strong>Key Skills &amp; Qualifications</strong></p>
<ul>
<li>U.S. Person Status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee)</li>
<li>Strong Coding Skills: You are a developer at heart and are comfortable writing production-level code to solve complex operational challenges.</li>
<li>Infrastructure as Code (IaC): Deep experience with Terraform for provisioning and managing cloud infrastructure and services.</li>
<li>Continuous Delivery: Familiarity with modern CI/CD practices and tools, particularly Spinnaker, to automate and standardize our release pipelines.</li>
<li>Containerization &amp; Orchestration: Expertise in container technologies and hands-on experience managing large-scale, production-ready clusters with Kubernetes.</li>
<li>Database Migrations: Experience with database schema management tools like Flyway for safely and reliably handling database changes.</li>
<li>Data Systems: Direct experience with large-scale data systems, specifically with the Snowflake platform.</li>
<li>AI/ML Experience (a plus): Experience or a strong interest in AI/ML, particularly how these technologies can be applied to improve reliability, security, and operational efficiency (e.g., AIOps, predictive analysis).</li>
<li>Problem-Solving: Excellent analytical and problem-solving skills with a proactive approach to identifying and addressing potential issues.</li>
</ul>
<p>This role requires in-person onboarding and travel to our San Francisco Office during the first week of employment.</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between:$147,000-$202,400 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$147,000-$202,400 USD</Salaryrange>
      <Skills>U.S. Person Status, Strong Coding Skills, Infrastructure as Code (IaC), Continuous Delivery, Containerization &amp; Orchestration, Database Migrations, Data Systems, AI/ML Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a SaaS company specializing in securing large-scale systems.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7591606</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ec3e47f7-26c</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Infrastructure Engineering Automation team. As a key member of our team, you will lead the development of robust tooling and AI-powered solutions underpinned by a centralized source of truth for all infrastructure data.</p>
<p>Your primary focus will be on two core pillars: Orchestration &amp; Patterns, and Infrastructure Intelligence. In the former, you will build the platform that allows teams to productionize their own automations using durable execution frameworks and standardized IaC patterns. In the latter, you will create, source, and enrich critical infrastructure and organizational data and make it accessible and actionable for both humans and AI agents.</p>
<p>To succeed in this role, you will need to design and develop high-performance internal tools and APIs using Go (Golang) to manage infrastructure metadata and lifecycle. You will also design complex, long-running workflows using durable execution frameworks (like Temporal) to orchestrate tasks across Git, Cloud providers, and CI/CD pipelines. Additionally, you will develop and implement Model Context Protocol (MCP) servers and Agentic AI workflows to automate the creation, upgrading, and auditing of infrastructure configurations.</p>
<p>You will collaborate with Infrastructure, Security, and Development teams to design &#39;Infrastructure Intelligence&#39; tools that provide deep insights into asset ownership and EOL lifecycles. Your expertise in Go (Golang), Temporal, MCP, and A2A frameworks will be crucial in driving the success of this project.</p>
<p>If you&#39;re a motivated and knowledgeable software engineer with a passion for building infrastructure tools, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go (Golang), Temporal, MCP (Model Context Protocol), A2A (Agent-to-Agent) frameworks, Infrastructure as Code (IaC), Cloud providers (GCP and AWS), CI/CD tools (GitHub Actions, Helm, ArgoCD)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8400168002</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a326112-c31</externalid>
      <Title>Professional Services, Technical Architect - West</Title>
      <Description><![CDATA[<p>As a Professional Services Technical Architect at GitLab, you&#39;ll lead the technical direction of customer engagements from early scoping and discovery through delivery. You&#39;ll design high-level architectures and implementation plans for GitLab infrastructure and functionality, and ensure deliverables align with customer requirements and the Statement of Work (SOW).</p>
<p>You&#39;ll coordinate and guide implementation work across GitLab Professional Services and partner consultants, support customers deploying GitLab in cloud and on-premises environments using tools like Terraform, Ansible, and the GitLab Environment Toolkit, and perform migrations to GitLab using Congregate.</p>
<p>You&#39;ll provide DevOps and DevSecOps consulting and best practices, contribute reusable collateral such as documentation, delivery kits, and training materials, and share product release updates to help the team deliver consistent outcomes.</p>
<p>This role combines deep technical expertise with customer-facing leadership in GitLab&#39;s remote, asynchronous, and values-driven environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and deliver GitLab reference architecture implementations for both self-managed and cloud environments, using infrastructure as code practices.</li>
<li>Lead source code management and CI/CD migrations to GitLab, including large-scale enterprise moves using Congregate.</li>
<li>Build repeatable delivery kits, documentation, and enablement materials that help Professional Services and partners deploy and adopt GitLab best practices.</li>
<li>Lead the full technical delivery lifecycle for GitLab Professional Services engagements, from early scoping and technical discovery through implementation and handoff.</li>
<li>Produce high-level and detailed technical designs for GitLab infrastructure and functionality, ensuring deliverables align with customer requirements and expectations.</li>
<li>Evaluate and communicate scalability and security considerations (including compliance constraints) in proposed GitLab reference architectures and implementation plans.</li>
<li>Deploy and configure GitLab in customer environments, including on-premises and major cloud providers, using Terraform, Ansible, and the GitLab Environment Toolkit (GET) aligned to reference architectures.</li>
<li>Plan and execute source system migrations to GitLab using Congregate, partnering closely with customer stakeholders to reduce risk, protect data integrity, and minimize downtime.</li>
<li>Provide DevOps and DevSecOps consulting and best-practice guidance, including advising on internal frameworks such as the Delivery Governance Framework (DGF) and GitLab Flow.</li>
<li>Coordinate and oversee implementation work across GitLab team members, partners, and customer points of contact (POCs), ensuring clear asynchronous communication and decision capture, effective execution, and high-quality outcomes.</li>
<li>Mentor Professional Services and partner consultants by contributing documentation, delivery kits, and training materials, and by leading enablement sessions on how to deliver and position service offerings.</li>
<li>Maintain and improve delivery automation assets with an emphasis on code cleanliness, maintainability, and appropriate unit/integration testing.</li>
<li>Support scoping and Statement of Work (SOW) creation with Professional Services Engagement Managers, stay current on monthly GitLab releases, and help Regional Delivery Managers with technical vetting and staffing assessments while maintaining a 55% billable utilization.</li>
<li>Review and provide input to Professional Services training materials and presentations.</li>
<li>Develop case studies, presentations, design documentation, and best-practice methodologies.</li>
<li>Work closely with customer project teams to ensure accurate task-level articulation of work required.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong written and verbal communication skills, including the ability to lead technical discussions with customers and partners and communicate risks/trade-offs clearly in an asynchronous environment.</li>
<li>Preference for candidates based in the San Francisco Bay Area or in PST/MST time zones.</li>
<li>Up to 50% travel may be required.</li>
<li>Demonstrated experience delivering two or more of the following consulting services: source code management migration, cloud architecture, DevOps engineering, or continuous integration and continuous delivery (CI/CD) consulting services.</li>
<li>Enterprise software development experience, with the ability to translate requirements into clear technical designs and implementation plans.</li>
<li>Progressive DevOps platform experience, including designing and implementing reliable, scalable systems with clear performance and security trade-offs.</li>
<li>Hands-on experience deploying and managing infrastructure in cloud providers and on-premises environments, including using tools such as Terraform and Ansible.</li>
<li>Ability to write clean, maintainable automation/integration code (e.g., Terraform modules, Ansible roles, scripts) and validate changes with appropriate testing and code review.</li>
<li>Experience performing migrations to GitLab, including using Congregate or similar migration tooling.</li>
<li>Working knowledge of data consistency and integrity concepts (e.g., ACID properties) and how they impact migration design and performance trade-offs.</li>
<li>Strong problem-solving, decision-making, organizational, and time management skills, with the ability to manage multiple priorities with minimal supervision.</li>
<li>Comfort working in a remote, asynchronous environment, using documented decisions (e.g., issues, proposals, and runbooks) to keep work unblocked across time zones while collaborating effectively across GitLab team members, partners, and customer stakeholders.</li>
<li>Bachelor&#39;s Degree in Information Technology, Computer Science, or other advanced technical degree, or equivalent experience</li>
</ul>
<p>About the Team: The Professional Services Technical Architect is part of GitLab&#39;s Professional Services organization. We partner with customers who are transitioning to GitLab, expanding how they use an existing GitLab installation, or planning complex upgrades to their infrastructure and processes. Our team brings a diverse set of skills across GitLab deployment, maintenance, and day-to-day usage, along with deep technical knowledge of adjacent tools and platforms. Working closely with Engagement Managers, partner consultants, and customer stakeholders, we focus on delivering high-quality outcomes, sharing best practices, and helping customers realize faster time to value from their GitLab investment. We operate in GitLab&#39;s remote, asynchronous, and values-driven environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Terraform, Ansible, GitLab Environment Toolkit, Congregate, DevOps, DevSecOps, Cloud architecture, Source code management, Continuous integration and continuous delivery, Infrastructure as code, Scalability, Security, Compliance, Data consistency and integrity, ACID properties</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8452994002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>542096f5-82b</externalid>
      <Title>Business Intelligence Manager</Title>
<Description><![CDATA[<p>As a Business Intelligence Manager, you will play a critical role in building secure, interactive data and AI applications hosted natively on the Databricks platform. You will design, build, and maintain scalable data web applications, AI chatbots, and custom operational interfaces using frameworks like Streamlit, React, and FastAPI. By leveraging Databricks Apps&#39; serverless infrastructure, you will eliminate the need for external hosting and empower business users to make informed decisions, bridging the gap between raw data and solutions with Databricks Apps, Databricks SQL, Lakebase, and AgentBricks.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Build: You will design and develop robust frontend interfaces and API backends (e.g., FastAPI routing user queries to model-serving endpoints). You will build solutions ranging from data-rich dashboards to enterprise chat solutions powered by the Mosaic AI Agent Framework.</li>
<li>Architect: You will design secure and scalable application architectures that satisfy GTM requirements for building custom SaaS applications.</li>
<li>Scale: You will create scalable applications that seamlessly connect to Databricks SQL via the Statement Execution API or the Databricks SDK. You will establish CI/CD pipelines using Databricks Asset Bundles (DABs) to automate deployment across development, staging, and production workspaces.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>You have 5+ years of experience working as a Software Engineer, Data App Developer, or Full-Stack Engineer building interactive web applications.</li>
<li>You are proficient in Python, DBSQL, and/or Node.js. Experience with frameworks like Streamlit, Dash, Flask, FastAPI, React, or Express is required.</li>
<li>You know the Databricks ecosystem. Familiarity with Unity Catalog, Databricks SQL, the Databricks SDK for Python, and Model Serving is highly preferred.</li>
<li>You have built for scale and security, with experience in CI/CD tools, Infrastructure as Code (specifically Databricks Asset Bundles), and secure OAuth flows.</li>
<li>You are passionate about applying AI, with experience integrating LLMs or the Mosaic AI Agent Framework into application backends to deliver intelligent chat and RAG solutions.</li>
<li>You excel in a collaborative environment. You can translate stakeholder requirements into intuitive user interfaces, working through dependencies and troubleshooting deployment errors or telemetry logs.</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$158,200-$217,450 USD</Salaryrange>
      <Skills>Python, DBSQL, Node.js, Streamlit, React, FastAPI, Unity Catalog, Databricks SQL, Databricks SDK for Python, Model Serving, CI/CD tools, Infrastructure as Code, OAuth flows, LLMs, Mosaic AI Agent Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data intelligence platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8501030002</Applyto>
      <Location>New York; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2fe8215c-605</externalid>
      <Title>Senior Software Engineer, Storage Infrastructure</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Emerging Technologies &amp; Incubation (ETI)</p>
<p>ETI is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers. Cloudflare&#39;s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>
<p>About the Team</p>
<p>ETI&#39;s Storage Infrastructure team is responsible for the core storage layer that underpins many of ETI&#39;s stateful services. Our scope ranges from managing the physical hardware to operating the distributed databases and storage systems built upon it. We run this infrastructure globally across Cloudflare&#39;s network, which presents unique and complex engineering puzzles. We tackle challenges such as efficiently expanding storage capacity, optimizing rebuild operations, and coordinating work across failure domains to uphold durability.</p>
<p>While other service teams focus on product development, our mission is to ensure the underlying storage is reliable, performant, and scalable. You&#39;ll be joining a highly motivated team that is building the next generation of distributed storage services.</p>
<p>Responsibilities</p>
<p>In this role, you will help build and operate the next generation of globally distributed storage systems. You will own your code from inception to release, delivering solutions at all layers of the stack. On any given day, you might write a design document for a new provisioning system, model failure domain dependencies across edge locations, benchmark new storage hardware, build standardized observability and runbooks for distributed database clusters, or automate operational toil through purpose-built tooling and intelligent automation.</p>
<p>You can expect to interact with a variety of languages and technologies including Rust, Go, Saltstack, and Terraform.</p>
<p>Examples of desirable skills, knowledge, and experience</p>
<ul>
<li>Strong programming skills with languages like Rust, Go, or Python</li>
<li>A solid understanding of distributed systems concepts such as consistency, consensus, data replication, fault tolerance, and partition tolerance</li>
<li>Experience with distributed databases and storage systems</li>
<li>Experience with infrastructure configuration tooling and infrastructure as code</li>
<li>Familiarity with storage fundamentals: block devices, filesystems, SSD characteristics</li>
<li>Experience building and maintaining high-throughput, low-latency systems</li>
<li>Understanding of network fundamentals as they relate to distributed storage -- bandwidth constraints, latency tradeoffs, cross-datacenter replication</li>
<li>Strong written and verbal communication skills and ability to explain technical decisions clearly</li>
<li>Comfortable operating in fast-paced environments with tight deadlines and evolving priorities</li>
</ul>
<p>Benefits</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo</p>
<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries, at no cost, with powerful tools to defend themselves against attacks that would otherwise censor their work: the same technology already used by Cloudflare&#39;s enterprise customers.</p>
<p>Athenian Project</p>
<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1</p>
<p>We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here&#39;s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, Python, Distributed systems, Consistency, Consensus, Data replication, Fault tolerance, Partition tolerance, Distributed databases, Storage systems, Infrastructure configuration tooling, Infrastructure as code, Storage fundamentals, Block devices, Filesystems, SSD characteristics, High-throughput systems, Low-latency systems, Network fundamentals, Bandwidth constraints, Latency tradeoffs, Cross-datacenter replication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet. It runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7629805</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a68e8bd-dd5</externalid>
      <Title>Consulting Architect - Observability</Title>
      <Description><![CDATA[<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You will translate business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack. You will lead end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation. You will partner with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</p>
<p>You will provide technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles. You will collaborate cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement. You will capture and share best practices, lessons learned, and solution patterns across the Elastic Services community.</p>
<p>You will guide customers in using Elastic Agents, Beats, and Logstash for time-series data ingestion, stream processing, and normalisation, along with related technologies. You will design and implement custom dashboards, visualisations, and alerting for critical observability use cases in Kibana. You will optimise ingestion pipelines for performance, scalability, and resiliency at enterprise scale.</p>
<p>You will have 5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains. You will have strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash. You will have knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</p>
<p>You will have an understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, and SLOs/SLIs. You will have experience with one or more of: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code. You will have familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</p>
<p>You will have proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale. You will have hands-on expertise in distributed systems and large-scale infrastructure. You will have the ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</p>
<p>You will have experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene. You will have strong communication and presentation skills, with experience engaging directly with customers. You will have a Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</p>
<p>You will be comfortable working in highly distributed teams, both remote and on-site when needed. The role may require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$133,100-$210,600 USD</Salaryrange>
      <Skills>observability, monitoring, time-series data ingestion, processing, pipelines, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, ingestion optimisation strategies, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7763314</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>396fe53d-121</externalid>
      <Title>Consulting Architect - Observability</Title>
      <Description><![CDATA[<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic’s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Observability platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack.</li>
<li>Leading end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation.</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles.</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement.</li>
<li>Capturing and sharing best practices, lessons learned, and solution patterns across the Elastic Services community.</li>
<li>Contributing to internal enablement, mentoring, and a culture of continuous learning and collaboration.</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains.</li>
<li>Expertise in the Telecommunications domain, especially with Mobile networks and devices.</li>
<li>Strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash.</li>
<li>Knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</li>
<li>Understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs.</li>
<li>Experience with one or more: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code.</li>
<li>Familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</li>
<li>Proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale.</li>
<li>Hands-on expertise in distributed systems and large-scale infrastructure.</li>
<li>Ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</li>
<li>Experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene.</li>
<li>Strong communication and presentation skills, with experience engaging directly with customers.</li>
<li>Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</li>
<li>Comfortable working in highly distributed teams, both remote and on-site when needed.</li>
<li>May require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>observability, monitoring, Elastic Stack, time-series data ingestion, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that enables everyone to find the answers they need in real time, using all their data, at scale. The company&apos;s products are used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7440232</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>73e0f7a0-d1b</externalid>
      <Title>Infrastructure Engineer, Sandboxing</Title>
      <Description><![CDATA[<p>We are seeking an experienced Infrastructure Engineer to join our Sandboxing team within the Research organization. In this role, you&#39;ll build and scale the systems that enable researchers to safely execute and experiment with AI-generated code and interactions in isolated environments.</p>
<p>As our models become more capable, the infrastructure supporting secure execution environments becomes increasingly critical. You&#39;ll work on distributed systems that must operate reliably at significant scale while maintaining strong security boundaries. Your work will directly support Anthropic&#39;s mission to develop AI systems that are safe, beneficial, and trustworthy.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and operate distributed backend systems that power secure sandboxed execution environments</li>
<li>Scale infrastructure to meet growing research and product demands while maintaining reliability and performance</li>
<li>Implement and maintain serverless architectures and container orchestration systems</li>
<li>Collaborate with research teams to understand requirements and translate them into robust infrastructure solutions</li>
<li>Develop monitoring, alerting, and observability systems to ensure operational excellence</li>
<li>Participate in on-call rotations and incident response to maintain system reliability</li>
<li>Contribute to infrastructure automation and tooling that improves developer productivity</li>
<li>Partner with security teams to ensure sandboxing infrastructure maintains appropriate isolation guarantees</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience building and operating backend infrastructure at scale</li>
<li>Have deep expertise in distributed systems design and implementation</li>
<li>Have strong operational experience, including debugging complex production issues</li>
<li>Are proficient with cloud platforms, particularly GCP/GCS (experience with AWS or Azure is also valuable)</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and understand their security implications</li>
<li>Are comfortable working with infrastructure as code and modern DevOps practices</li>
<li>Have strong programming skills in languages such as Python, Go, or Rust</li>
<li>Are results-oriented with a bias towards flexibility and impact</li>
<li>Care about the societal impacts of your work and are motivated by Anthropic&#39;s mission</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Serverless architectures and functions-as-a-service platforms (Cloud Functions, Cloud Run, Lambda)</li>
<li>Designing and implementing secure multi-tenant systems</li>
<li>High-performance computing environments or ML infrastructure</li>
<li>Linux systems internals, including namespaces, cgroups, and seccomp</li>
<li>Network security and isolation techniques</li>
<li>Building systems that support research workflows and rapid iteration</li>
</ul>
<p>The annual compensation range for this role is $300,000-$405,000 USD.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification; not every strong candidate does.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems design and implementation, cloud platforms (GCP/GCS), containerization technologies (Docker, Kubernetes), infrastructure as code and modern DevOps practices, strong programming skills in languages such as Python, Go, or Rust, serverless architectures and functions-as-a-service platforms, secure multi-tenant systems, high-performance computing environments or ML infrastructure, Linux systems internals, network security and isolation techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030680008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04884ef5-f9e</externalid>
      <Title>Software Engineer, Compute (8+ YOE)</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced software engineer to help lead the next phase of platform maturity in how we run Kubernetes at Airtable. As a member of the Compute Platform team, you&#39;ll be responsible for building and evolving the infrastructure that powers Airtable&#39;s services at scale.</p>
<p>Your primary focus will be on designing, implementing, and scaling core Kubernetes platform capabilities used across ~70 clusters, spread across multiple environments. You&#39;ll also lead foundational modernization efforts, such as migrating to a new CNI plugin to overhaul IP security rule management across clusters and regions.</p>
<p>In addition to your technical expertise, you&#39;ll collaborate closely with product and security teams to power a rapidly growing enterprise business. You&#39;ll spend roughly 70% of your time in hands-on engineering and 30% in design reviews, mentorship, and cross-team collaboration.</p>
<p>To succeed in this role, you&#39;ll need 8+ years of software engineering experience, with deep expertise building and operating a Kubernetes-based internal service platform. You&#39;ll also need a strong understanding of Kubernetes internals, including controllers/operators, CRDs, networking, and cluster architecture.</p>
<p>If you&#39;re excited about building internal platforms, shaping infrastructure strategy, and partnering closely with product and security teams, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Typescript, Golang, Cloud Native Infrastructure, CI/CD, Infrastructure as Code, Terraform, CloudFormation, OpenTofu, Pulumi, AWS infrastructure, EKS, Spinnaker, ArgoCD, Flux, Jenkins</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers organisations to transform how work gets done. Over 500,000 organisations, including 80% of the Fortune 100, rely on Airtable.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8442397002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5eb1737d-7a1</externalid>
      <Title>GRC Engineering Manager</Title>
      <Description><![CDATA[<p>We are seeking a GRC Engineering Manager to join our GRC organization and build the technical foundation for how we scale our risk and compliance programs.</p>
<p>In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems. This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic.</p>
<p>The core challenge is a data problem: compliance information lives across dozens of systems (cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories), and your job is to design systems that bring it together, normalize it, and make it actionable.</p>
<p>Success in this role comes from understanding how systems connect and how data flows between them, not from writing code yourself. At Anthropic, you&#39;ll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort.</p>
<p>You&#39;ll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments. Working closely with Security, IT, and Engineering teams, you&#39;ll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn&#39;t otherwise be possible.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the team that establishes foundational GRC processes and architecture.</li>
<li>Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.</li>
<li>Build data pipelines that aggregate risk, control, and asset information from across our technology stack.</li>
<li>Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.</li>
<li>Translate written policies and compliance requirements into policy-as-code, working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.</li>
<li>Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.</li>
<li>Design and deploy agentic AI workflows that extend team capacity, using Claude to serve as a virtual GRC analyst to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.</li>
<li>Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines, working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.</li>
<li>Build and lead an AI-forward GRC engineering function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>12+ years of total experience and 3-4+ years of experience managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.</li>
<li>A systems thinker first. You understand how complex environments work: how data flows between systems, where integration points exist, what breaks when systems don&#39;t talk to each other.</li>
<li>5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation.</li>
<li>A relentless focus on data integration: you understand how to pull data from multiple sources, normalize it, join it meaningfully, and surface insights.</li>
<li>Strong analytical and problem-solving skills with attention to detail necessary for compliance work, balanced with pragmatism about risk-based prioritization in fast-paced environments.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience designing or implementing AI-powered automation, agentic workflows, or LLM-based tooling in operational contexts.</li>
<li>Experience with GRC platforms such as ServiceNow GRC, Vanta, Drata, OneTrust, RSA Archer, or similar tools including configuration, customization, and integration capabilities.</li>
<li>Familiarity with scripting languages (Python or similar) for automation tasks, API interactions, and data transformation.</li>
<li>Prior experience in high-growth startup environments demonstrating ability to build scalable processes and adapt quickly to changing requirements and priorities.</li>
<li>Familiarity with Infrastructure as Code tools (Terraform, CloudFormation, Ansible) and DevSecOps practices including CI/CD pipeline integration and policy-as-code implementations.</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure) and an understanding of how compliance-relevant data can be extracted from their APIs and logging systems.</li>
</ul>
<p><strong>Deadline to Apply:</strong> None, applications will be received on a rolling basis.</p>
<p><strong>Annual Compensation Range:</strong> $405,000-$405,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$405,000 USD</Salaryrange>
      <Skills>GRC, Automation, Data Pipelines, System Integrations, Compliance, Risk Management, Audit Programs, Agentic AI, Policy-as-Code, DevSecOps, Cloud Platforms, APIs, Logging Systems, AI-Powered Automation, LLM-Based Tooling, GRC Platforms, Scripting Languages, Infrastructure as Code, CI/CD Pipeline Integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing company developing AI systems. It aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980335008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bdf9dc88-fbe</externalid>
      <Title>Infrastructure Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking a talented and motivated Cloud/Infrastructure Security Engineer to join our security team.</p>
<p>In this role, you will design, implement, and maintain secure cloud infrastructure and ensure the integrity of our cloud-native applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement secure cloud architectures across multiple cloud platforms (e.g., AWS, GCP, Azure)</li>
<li>Develop and maintain Infrastructure as Code (IaC) templates with embedded security controls</li>
<li>Conduct regular security assessments and audits of cloud infrastructure and services</li>
<li>Implement and manage cloud security tools and services (e.g., CSPM, CWPP, CASB)</li>
<li>Collaborate with development teams to ensure security best practices are integrated into CI/CD pipelines</li>
<li>Monitor and respond to security events and incidents in cloud environments</li>
<li>Develop and maintain cloud security policies, standards, and procedures</li>
<li>Stay current with emerging cloud security threats and mitigation strategies</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, or a related field</li>
<li>3-5 years of experience in cloud security or related roles</li>
<li>Strong understanding of cloud security principles, compliance frameworks, and best practices</li>
<li>Proficiency in at least one cloud platform (AWS, GCP, or Azure) and associated security services</li>
<li>Experience with Infrastructure as Code tools (e.g., Terraform, CloudFormation)</li>
<li>Familiarity with containerization technologies and their security implications</li>
<li>Knowledge of network security concepts and protocols</li>
<li>Experience with scripting languages (e.g., Python, Bash) for automation and tool development</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Relevant security certifications (e.g., CCSP, CCSK, AWS Security Specialty)</li>
<li>Experience with multi-cloud environments and cloud-to-cloud security</li>
<li>Knowledge of DevSecOps practices and tools</li>
<li>Experience with Kubernetes and container security</li>
<li>Experience in building custom cloud security tools or integrations</li>
<li>Interest in leveraging AI for cloud security monitoring and automation</li>
<li>Contributions to open-source cloud security projects</li>
<li>Experience with securing AI/ML workloads in cloud environments</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$200,000 - $340,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$200,000 - $340,000 USD</Salaryrange>
      <Skills>Cloud security principles, Compliance frameworks, Best practices, Cloud platform (AWS, GCP, or Azure), Infrastructure as Code tools (Terraform, CloudFormation), Relevant security certifications (CCSP, CCSK, AWS Security Specialty), Multi-cloud environments and cloud-to-cloud security, DevSecOps practices and tools, Kubernetes and container security, Building custom cloud security tools or integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090998007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42187d42-78e</externalid>
      <Title>Staff Engineer (Backend, DevOps, Infrastructure)</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operational efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market.</p>
<p>Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We&#39;re on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.</p>
<p>As a Staff Engineer, you will:</p>
<p>Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.</p>
<p>Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We&#39;re not just building software; we&#39;re designing AI that people want to work with: delightful, trustworthy, and deeply effective.</p>
<p>Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.</p>
<p>You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you&#39;ll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.</p>
<p>You&#39;ll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding.</p>
<p>As our first US-based engineer, you&#39;ll bridge the gap between our product vision and technical implementation. This role offers a rare opportunity to directly influence how we architect the next generation of our platform.</p>
<p>You&#39;ll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems.</p>
<p>Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.</p>
<p>We&#39;re looking for a Staff Engineer to help us bring that future to life. This is not just another dev role. You&#39;ll be hands-on shaping the technical DNA of Zuma. You&#39;ll architect critical systems, tame legacy code, build net-new AI-powered experiences, and lay down the patterns future engineers will inherit.</p>
<p>If you&#39;re obsessed with building real products people use, especially products powered by LLMs, this might be your playground.</p>
<p><strong>Why This Could Be Your Dream Role</strong></p>
<ul>
<li>You&#39;ll work directly with cutting-edge LLM technology in a real-world application</li>
<li>You want to work at a company where customers feel your impact every day</li>
<li>You&#39;ll architect AI-powered systems that are transforming the real estate industry</li>
<li>You&#39;ll have autonomy to design and implement innovative technical solutions</li>
<li>Your work will directly impact thousands of apartment communities and millions of renters</li>
<li>You&#39;ll receive significant equity in a venture-backed company with strong traction</li>
<li>As we scale, your role and influence will grow with the company</li>
</ul>
<p><strong>Why You Might Want to Think Twice</strong></p>
<ul>
<li>This is a demanding role that will often require extended hours and deep commitment</li>
<li>As a founding team member, you&#39;ll need to wear multiple hats and step outside your comfort zone</li>
<li>You&#39;ll need to make thoughtful tradeoffs between innovation and immediate needs</li>
<li>You&#39;ll interact directly with customers to understand their needs and occasionally travel to their offices</li>
<li>We&#39;re a startup - priorities can shift rapidly as we respond to market opportunities and customer needs</li>
<li>If you&#39;re not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn&#39;t the job for you</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p><strong>Your Experience Looks Like</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p><strong>Guiding Principles</strong></p>
<ul>
<li>Customer‑First Outcomes</li>
</ul>
<p>Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.</p>
<ul>
<li>Bias for Simplicity</li>
</ul>
<p>We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.</p>
<ul>
<li>Quality Is a Gate, Not an After‑Thought</li>
</ul>
<p>Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.</p>
<ul>
<li>Data‑Driven Choices</li>
</ul>
<p>We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.</p>
<ul>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Good ideas don’t expire in Zoom. We operate in public.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/800b8d69-b1e0-4524-a0a7-a5cec8b337b5</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>695657b2-bfc</externalid>
      <Title>Senior Software Engineer, Data Acquisition</Title>
      <Description><![CDATA[<p>We are seeking a senior engineer to join our Data Acquisition (DA) team. Engineers at Zus have the opportunity to collaborate with our founding product and engineering leaders to bring our vision to the nation’s healthcare entrepreneurs.</p>
<p>The engineer joining this team will help build tools that interact with external health data networks to collect information about our patients and load it into the Zus data stores at high volume, as well as services used by customers and internal stakeholders to request that data.</p>
<p>You will work on data pipelines that operate on large scale data using a variety of AWS services (Step Functions, Lambda, DynamoDB, S3, etc). You will also work on RESTful services that are used both internally and externally. Go is our language of choice, although we also have some components written in NodeJS.</p>
<p>The team is responsible for deploying, maintaining, and operating its pipelines and services. Our Zus engineering teams are all US-based, and we hire only in the US.</p>
<p>In Data Acquisition, we work across a collection of US timezones and also collaborate with our development partners in Central European Time.</p>
<p>Zus supports both remote work and hybrid work in the Boston area with an office near South Station, and our teams are a mix of both styles of work.</p>
<p>We actively work to make sure all voices are heard and information is shared regardless of your work location.</p>
<p><strong>You&#39;re a good fit because you...</strong></p>
<ul>
<li>Are scrappy and you move fast</li>
<li>Have experience with operationally stable and cost efficient data pipelines</li>
<li>Enjoy owning your work and seeing it deploy safely in production</li>
<li>Have experience building backend software in any language (we use mostly Go with a bit of Node)</li>
<li>Have some experience with at least one of the following: deployment technologies (GitHub Actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, Chef)</li>
<li>Are excited to ~ finally! ~ enable a true digital revolution in healthcare</li>
<li>Thrive amid the changing landscape of a growing and evolving startup</li>
<li>Enjoy collaboration and solving unique problems</li>
<li>Are comfortable working remotely (EST/CST preferred as that is where our team is located) and are willing to travel for in person collaboration occasionally</li>
</ul>
<p><strong>It would be awesome if you were...</strong></p>
<ul>
<li>Experienced in building and running large-scale systems in the cloud</li>
<li>Experienced in building services and APIs used by third-party developers</li>
<li>Knowledgeable about application security</li>
<li>Experienced in working with healthcare data and APIs</li>
<li>Familiar with the FHIR and/or TEFCA standards</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>This role can be hybrid in Boston or mostly remote. We’re flexible, because we trust our people to do great work wherever they’re most productive. We’re proudly remote-first, but not strangers by any means. We get together a few times a year to build real rapport, align on strategy, and connect as people.</p>
<p>We believe strong culture is built on trust, transparency, and showing up, online or in person. So yes, work from where you thrive… and plan on the occasional gathering where the strategy is sharp, the conversations are candid, and the snacks are usually excellent.</p>
<p>We will offer you…</p>
<ul>
<li>Competitive compensation that reflects the value you bring to the team: a combination of cash and equity</li>
<li>Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO</li>
<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-180,000 per year</Salaryrange>
      <Skills>Go, NodeJS, AWS services (Step Functions, Lambda, DynamoDB, S3, etc), RESTful services, deployment technologies (Github actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), Infrastructure as Code (Terraform, CloudFormation, Chef), building and running large-scale systems in the cloud, building services and APIs used by third-party developers, application security, working with healthcare data and APIs, FHIR and/or TEFCA standards</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/775b2ba8-80ee-4d7b-8bfb-0bab2b094793</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c20d7221-4b5</externalid>
      <Title>Support Engineer</Title>
      <Description><![CDATA[<p>As a Support Engineer at Zuma, you&#39;ll be a bridge between our customers, engineering team, and product vision. You&#39;ll ensure new customers onboard smoothly, integrations run reliably, and support operations scale as we grow. This is a hands-on role for someone who loves problem-solving, can dive into APIs and databases, and takes pride in clear documentation and communication.</p>
<p>You&#39;ll help property managers succeed with our AI platform while also driving continuous improvements in our internal tools and processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p>Your Experience Looks Like:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>3+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p>Guiding Principles:</p>
<ul>
<li>Customer‑First Outcomes</li>
<li>Bias for Simplicity</li>
<li>Quality Is a Gate, Not an After‑Thought</li>
<li>Data‑Driven Choices</li>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Other Benefits:</p>
<ul>
<li>Great health insurance, dental, and vision</li>
<li>Gym and workspace stipends</li>
<li>Computer and workspace enhancements</li>
<li>Unlimited PTO</li>
<li>Opportunity to play a critical role in building the foundations of the company and Engineering culture</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management businesses across the US and Canada, a ~$200B market.</Employerdescription>
      <Employerwebsite>https://www.zuma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/da4d2130-954e-4b29-a9ef-3926b9bedba6</Applyto>
      <Location>US and Canada</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8ae5b8d5-5a4</externalid>
      <Title>Security Engineer</Title>
      <Description><![CDATA[<p>As a Security Engineer at Yuno, you will be responsible for embedding security by default across our development and operations workflows.</p>
<p>In this role, you will work closely with Engineering and DevOps teams to design, implement, and maintain secure cloud infrastructure, CI/CD pipelines, and containerized environments.</p>
<p>You will play a key role in strengthening our security posture across AWS and GCP, automating security controls through infrastructure as code, and ensuring compliance with industry standards such as PCI DSS and SOC 2, enabling Yuno to scale securely in the global payments ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Design, build, and maintain secure and scalable internal security solutions and tools using Python to support security operations and strengthen technical controls.</p>
</li>
<li><p>Improve and manage security configurations in AWS and GCP (including WAF, Security Hub, IAM policies, SIEM integrations and other critical services) to continuously strengthen our overall cloud security posture and ensure best practices are implemented.</p>
</li>
<li><p>Implement and maintain security processes and technical controls that support compliance requirements (e.g., PCI DSS, ISO 27001/27701, SOC 2).</p>
</li>
<li><p>Collaborate with different teams on cross-functional security initiatives, providing technical expertise and ensuring alignment with best practices.</p>
</li>
<li><p>Explore and evaluate emerging technologies and architectures (e.g., AI integrations) to ensure secure adoption.</p>
</li>
</ul>
<p>Skills You Need:</p>
<ul>
<li><p>4+ years of hands-on experience in security engineering or similar technical security roles.</p>
</li>
<li><p>Strong experience designing and developing security tools or internal products to support security operations using Python.</p>
</li>
<li><p>Solid knowledge of AWS and GCP security services and configurations.</p>
</li>
<li><p>Practical experience working with compliance frameworks (e.g., PCI DSS, ISO 27001/27701, SOC 2) in cloud environments.</p>
</li>
<li><p>Strong problem-solving skills and the ability to communicate and collaborate effectively with cross-functional teams.</p>
</li>
<li><p>Verbal and written English fluency.</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Familiarity with SIEM platforms and security monitoring tools.</p>
</li>
<li><p>Experience with Kubernetes and container security.</p>
</li>
<li><p>Experience with infrastructure as code (e.g., Terraform, CloudFormation).</p>
</li>
<li><p>Familiarity with emerging architectures (e.g., serverless, event-driven, AI integrations).</p>
</li>
<li><p>Experience embedding security practices across the software development lifecycle, including CI/CD pipelines.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, AWS, GCP, PCI DSS, SOC 2, ISO 27001/27701, Cloud security, Infrastructure as code, CI/CD pipelines, Container security, SIEM platforms, Kubernetes, Terraform, CloudFormation, Serverless, Event-driven, AI integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Yuno</Employername>
      <Employerlogo>https://logos.yubhub.co/yuno.com.png</Employerlogo>
      <Employerdescription>Yuno is building the payment infrastructure that allows all companies to participate in the global market.</Employerdescription>
      <Employerwebsite>https://www.yuno.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/yuno/f67be624-8969-4967-baec-1d924213a482</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7f43bb14-3c4</externalid>
      <Title>Senior Cloud Engineer</Title>
      <Description><![CDATA[<p>Shield AI is seeking a Senior Cloud Engineer to support its leadership in applied artificial intelligence development. In this role, you will be responsible for engineering, deploying, provisioning, and managing critical cloud systems that drive innovation across Shield AI&#39;s public and private cloud environments, both domestically and internationally.</p>
<p>As part of the Cloud and Infrastructure team within Enterprise Operations, you will play a key role in ensuring the performance, scalability, and reliability of these systems to support various business units. This position may involve occasional travel to Shield AI locations.</p>
<p><strong>Responsibilities:</strong></p>
<p><strong>Engineering:</strong></p>
<ul>
<li>Manage and optimize multi-cloud infrastructure (Azure, AWS) for performance, reliability, and scalability.</li>
<li>Support and optimize cloud and virtual machine environments, assisting with capacity planning, performance monitoring, security compliance, and vulnerability remediation.</li>
<li>Assist in implementing and maintaining infrastructure systems, including servers, storage, backup solutions, and disaster recovery processes, for both public and private clouds.</li>
<li>Continuously learn and adapt to emerging technologies and platforms, leveraging automation wherever possible.</li>
<li>Author and produce the necessary documentation for engineered and maintained systems and their associated processes, which supporting teams can leverage.</li>
<li>Assist in researching, recommending, and developing innovative solutions for complex requirements and issue resolution.</li>
<li>Collaborate cross-functionally with AI, DevOps, and Security teams to ensure compliance, observability, and resilience in mission-critical environments.</li>
<li>Follow Agile methodologies and apply sound engineering principles.</li>
</ul>
<p><strong>Operations and Support:</strong></p>
<ul>
<li>Perform daily system monitoring, verifying the integrity and availability of all server resources, systems and key processes, reviewing system and application logs.</li>
<li>Support system maintenance and upgrades, including OS patching, software configuration, hardware updates, and performance tuning to ensure optimal cloud infrastructure performance.</li>
<li>Provide escalated support for operational issues possibly during and after normal business hours for systems, workloads, and Kubernetes AI infrastructure.</li>
<li>Analyze, troubleshoot and resolve system infrastructure and software issues.</li>
<li>Ability to participate in on-call, emergency, or maintenance roles.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field, or equivalent experience (4+ years), plus an engineer-level certification such as Azure/AWS Associate or another similar certification.</li>
<li>4 years’ experience supporting applications and systems in production; high-availability, mission-critical, or defense-grade environments preferred.</li>
<li>Comfortable with operational efficiencies utilizing Infrastructure as Code (IaC) solutions (e.g., Terraform, Ansible).</li>
<li>Strong understanding of networking concepts (VPCs, VPNs, subnets, routing, firewalls).</li>
<li>Experience in automating repetitive tasks using scripting languages such as PowerShell, Python, or Bash.</li>
<li>Experience with deployment and systems administration of at least one Linux distribution (e.g., RHEL, Ubuntu).</li>
<li>Familiarity with Microsoft Windows Server administration, Azure, and Active Directory environments.</li>
<li>Possesses organizational skills, with a process-oriented mindset, attention to detail, and effective verbal and written communication abilities.</li>
<li>Ability to work independently to accomplish assigned tasks.</li>
<li>Solution-oriented, constructive approach to problem-solving.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience deploying and maintaining workloads in Azure public cloud environments.</li>
<li>Hands-on experience with containerization and Kubernetes-based workloads.</li>
<li>Strong understanding of virtualization and private cloud platforms (e.g., VMware, Hyper-V, KVM).</li>
<li>Background in DevOps, Site Reliability Engineering (SRE), or cloud infrastructure roles.</li>
<li>Proficiency with configuration management and automation tools (e.g., Ansible, Chef, Puppet, Terraform).</li>
<li>Experience building and optimizing CI/CD pipelines.</li>
</ul>
<p><strong>Salary and Benefits:</strong></p>
<ul>
<li>$110,000 - $170,000 a year</li>
<li>Full-time regular employee offer package: Pay within range listed + Bonus + Benefits + Equity</li>
<li>Temporary employee offer package: Pay within range listed above + temporary benefits package (applicable after 60 days of employment)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000 - $170,000 a year</Salaryrange>
      <Skills>Cloud Engineering, Multi-cloud infrastructure, Azure, AWS, Networking concepts, Infrastructure as Code, Scripting languages, Linux distribution, Microsoft Windows Server administration, Active Directory environments, Containerization, Kubernetes-based workloads, Virtualization, Private cloud platforms, DevOps, Site Reliability Engineering, Configuration management, Automation tools, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/702e2609-db48-49ab-8bec-d405c956a6ce</Applyto>
      <Location>San Diego, California / Dallas, Texas / San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c3536285-729</externalid>
      <Title>Senior Full-Stack Engineer</Title>
<Description><![CDATA[<p>We&#39;re looking for a Senior Full-Stack Engineer to join our forward-deployed engineering team. You&#39;ll work directly with state governments, public sector partners, and enterprise clients to design, build, and deploy impactful identity solutions.</p>
<p>This role blends hands-on software development, technical consulting, and customer success: ideal for someone who thrives at the intersection of technology and mission-driven impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and deploy full-stack solutions for state governments and public sector partners.</li>
<li>Collaborate with customer delivery leads, engineers, and UX designers to ensure successful deployments.</li>
<li>Translate customer requirements into technical architectures and production-ready systems.</li>
<li>Serve as a trusted technical advisor for partners adopting open identity standards and privacy best practices.</li>
<li>Build backend services and full-stack web or mobile apps that meet public sector security, privacy, and accessibility standards.</li>
<li>Contribute to Rust codebases that run across backend, mobile, and browser environments.</li>
<li>Manage customer deployments and provide post-launch technical support.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>2+ years of experience building backend systems in statically typed languages (Rust, Go, C#, or Java).</li>
<li>Strong background in modern web frontends (React, TypeScript, or similar) with an eye for accessibility and security.</li>
<li>Proven ability to lead cross-functional engineering efforts and deliver production-grade systems.</li>
<li>Strong appreciation for open-source software, standards-based design, and community-driven development.</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and DevOps practices.</li>
<li>Excellent communication skills and comfort working directly with customers or stakeholders.</li>
<li>Based in the U.S., excited to collaborate with state government partners.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience with digital identity, cryptography, data privacy, or blockchain technologies (e.g., Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect).</li>
<li>Familiarity with PostgreSQL, GraphQL, or RESTful API design and development.</li>
<li>Understanding of CI/CD pipelines, infrastructure as code, and automation using Terraform, or similar tools.</li>
<li>Exposure to mobile app development (React Native, Flutter, or similar frameworks).</li>
<li>Experience in security engineering, access control, federated identity, or PKI systems.</li>
<li>Prior work in public sector, government technology, or other high-compliance environments.</li>
<li>Interest in usability, accessibility (WCAG, Section 508), and inclusive product design.</li>
<li>Contributions to open-source projects or participation in digital identity standards bodies (W3C, DIF, IETF) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C#, Java, React, TypeScript, Cloud infrastructure, DevOps practices, PostgreSQL, GraphQL, RESTful API design, CI/CD pipelines, Infrastructure as code, Automation, Terraform, Mobile app development, Security engineering, Access control, Federated identity, PKI systems, Digital identity, Cryptography, Data privacy, Blockchain technologies, Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect, Usability, Accessibility, Inclusive product design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>SpruceID</Employername>
      <Employerlogo>https://logos.yubhub.co/spruceid.com.png</Employerlogo>
      <Employerdescription>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions for governments and enterprises.</Employerdescription>
      <Employerwebsite>https://spruceid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/sprucesystems/b6ed1d39-d3e4-454f-8d8c-a5a65d64651f</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5f40194b-3c0</externalid>
      <Title>Product Manager, Forge</Title>
      <Description><![CDATA[<p>We are seeking a talented and experienced product manager to define and execute the strategy for Forge, our product that enables customers to build, fine-tune and deploy custom AI models at scale.</p>
<p>Forge turns cutting-edge research into enterprise-ready capabilities by powering model fine-tuning, reinforcement learning and post-training workflows. By working at the intersection of research and product it provides customers with the tools to train specialized models that deliver real-world business value.</p>
<p>As the PM leading Forge, you will shape a 0-to-1 product with significant business impact and the potential to grow the offering, while defining how organizations train and deploy the next generation of AI models.</p>
<p>Key Responsibilities:</p>
<p><strong>Define the Future</strong></p>
<ul>
<li>Set the vision: Shape and evangelize a compelling product strategy for Forge, ensuring alignment with company goals and market opportunities.</li>
<li>Spot the gaps: Lead market and UX research to uncover unmet needs, competitive whitespace, and emerging trends in SOTA AI post-training capabilities.</li>
</ul>
<p><strong>Build &amp; Ship</strong></p>
<ul>
<li>Own the lifecycle: Drive end-to-end product development, from ideation to launch and iteration, balancing speed, quality, and user delight.</li>
<li>Champion the user: Partner with design and research to craft intuitive, high-impact experiences, using data and feedback to refine continuously.</li>
</ul>
<p><strong>Scale, Execute, &amp; Enable</strong></p>
<ul>
<li>Go-to-market: Collaborate with marketing and sales to launch products successfully, including pricing, positioning, and adoption strategies.</li>
<li>Align stakeholders: Rally engineering, design, and business teams around priorities, trade-offs, and timelines.</li>
<li>Prioritize ruthlessly: Maintain a dynamic roadmap that delivers quick wins while advancing long-term bets.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Product management experience: 5+ years of relevant experience in new, competitive, fast-paced, and ambiguous environments, with a track record of building and scaling complex AI/ML or infrastructure solutions.</li>
<li>Technical skills: Very good understanding of training pipelines, RL loops, and model deployment architectures; expertise in AI model lifecycle management, including fine-tuning, evaluation, and serving; experience with Infrastructure as Code (IaC), containerization, and scalable deployment modes (e.g., on-prem, VPC, cloud). Familiarity with Kubernetes/Slurm is a strong plus.</li>
<li>User obsession: Relentless focus on solving real user problems, backed by data and qualitative insights.</li>
<li>Cross-functional influence: Proven ability to align and inspire engineering, design, and go-to-market teams without direct authority.</li>
<li>Problem-solving: Balance big-picture thinking with hands-on problem-solving; you’re equally comfortable crafting a roadmap, diving into metrics, and running technical tests.</li>
<li>Communication: Crisp, persuasive storytelling for executives, teams, and users; ability to distill complex technical concepts (e.g., RL, LoRA, SFT) into clear narratives for docs, decks, and workshops.</li>
<li>Adaptability: Thrive in high-velocity, dynamic settings where priorities shift quickly.</li>
<li>Collaboration: Low ego + high EQ; you build trust and drive decisions through clarity, not hierarchy.</li>
<li>Autonomy: Self-directed with a bias for action; you own outcomes end-to-end.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Infrastructure knowledge: Strong knowledge of model training, model architectures, etc.</li>
<li>Strong understanding of how complex architectures are designed and of the impact of deployment modes.</li>
<li>Proficient coding skills are strongly recommended.</li>
<li>Kubernetes know-how is strongly recommended.</li>
<li>Growth mindset: Deep familiarity with product-led growth strategies (e.g., viral loops, onboarding optimization, monetization).</li>
<li>Builder’s mindset: Founder or early-stage PM experience; you’ve turned 0 → 1 ideas into products users love.</li>
<li>Technical depth: Ability to prototype, hack, or dive into code when needed.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>training pipelines, RL loops, model deployment architectures, AI model lifecycle management, fine-tuning, evaluation, serving, Infrastructure as Code (IaC), containerization, scalable deployment modes, Kubernetes/Slurm, model training, model architectures, complex architectures, deployment modes, proficient coding skills, Kubernetes know-how, product-led growth strategies, viral loops, onboarding optimization, monetization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that designs and develops high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/11087966-f183-44b1-adc9-3a400c1f52ad</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>08f992cf-0e9</externalid>
      <Title>CyberSecurity Team Lead, Infrastructure and Application</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>Mistral AI is a technology company that develops and provides AI-powered solutions and platforms for enterprise use. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>As a CyberSecurity Team Lead, you will be responsible for architecting and enforcing the security posture of our entire technical stack, from on-premise foundations to cloud-native deployments.</p>
<p>Responsibilities</p>
<ul>
<li>Oversee the identification, prioritization, and remediation of vulnerabilities across both On-Prem and Cloud infrastructures as well as internal applications.</li>
<li>Select, deploy, and maintain the tools needed for visibility and protection, including CNAPP, CSPM, SAST/DAST, secret scanning, and SBOM/CVE tracking.</li>
<li>Integrate security controls and automated gates directly into CI/CD pipelines to catch vulnerabilities before deployment (Shift Left).</li>
<li>Partner with engineering teams to interpret findings and &#39;ease the fix,&#39; providing patches, code snippets, or architectural advice to resolve issues quickly.</li>
<li>Define and maintain rigorous security guidelines and best practices for developers and system administrators.</li>
<li>Design and lead security awareness programs and technical training tailored for developers and admins to reduce human risk.</li>
<li>Track and define key security metrics (MTTR, coverage, vulnerability density) to visualize posture and progress to leadership.</li>
</ul>
<p>Requirements</p>
<ul>
<li>6+ years of experience in Information Security, with a specific focus on Application Security, Cloud Security, or DevSecOps.</li>
<li>Strong scripting skills (Python, Go, or Bash) to automate security tasks and integrate tools.</li>
<li>Deep understanding of CI/CD ecosystems and container orchestration (Kubernetes/Docker).</li>
<li>Hands-on experience with modern security tooling (e.g., Wiz, Snyk, SonarQube, Prisma, or similar enterprise tools).</li>
<li>Collaborative mindset: you view developers as partners, not adversaries, and focus on enabling them to code securely.</li>
<li>Clear communication, autonomous, and capable of translating technical security risks into actionable engineering tasks.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive health insurance</li>
<li>Flexible working hours</li>
<li>Professional development opportunities</li>
</ul>
<p>Note: The company may offer additional benefits not listed here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Application Security, Cloud Security, DevSecOps, CI/CD ecosystems, Container orchestration, Modern security tooling, Scripting skills, Collaborative mindset, Clear communication, Industry certifications, Infrastructure as Code, Offensive security, Prior experience securing large-scale AI or Machine Learning infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides AI-powered solutions and platforms for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c9b75928-dd48-4432-b6f1-fc0b24e51657</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5bb951eb-f98</externalid>
      <Title>Applied AI Engineer, Senior/Staff Devops/SRE - EMEA</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany and Singapore. We offer a comprehensive AI platform that meets enterprise needs, whether on-premises or in cloud environments.</p>
<p>Our offerings include le Chat, the AI assistant for life and work.</p>
<p>About The Job</p>
<p>Mistral AI is seeking an Applied AI Engineer focused on DevOps to facilitate the adoption of its products among customers and collaborate with them to address complex technical challenges.</p>
<p>In this role, you’ll apply your problem-solving ability, creativity, and technical skills to help organizations use AI to drive real impact in the world.</p>
<p>Responsibilities</p>
<p>• Onboard customers on our products, providing guidance on deployment and integration, and ensuring the best production setup from the low-level GPU stack up to infrastructure, back-end and front-end interfaces.</p>
<p>• Work on deploying state-of-the-art AI applications from consumer products to industrial use cases, driving with our customers a crucial technological transformation.</p>
<p>• Collaborate with our researchers, other AI engineers, and product engineers on our most complex customer projects involving deployment, scaling, and contributing to our open-source codebases for tasks such as inference and fine-tuning.</p>
<p>• Be involved in pre-sales calls to understand potential clients&#39; needs, challenges, and aspirations. You will provide technical guidance on our products and explain Mistral technologies to various stakeholders.</p>
<p>About You</p>
<p>• You are fluent in English.</p>
<p>• You hold a Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</p>
<p>• You have 2+ years of experience in a DevOps or Site Reliability Engineering role.</p>
<p>• You&#39;re experienced with deploying and managing AI-based products in production environments.</p>
<p>• You are fluent in Python.</p>
<p>• You have experience with containerization technologies such as Docker and Kubernetes.</p>
<p>• You have experience with CI/CD pipelines and automated deployment tools.</p>
<p>• You have deep understanding of cloud platforms (AWS, Azure, GCP) and on-premises infrastructure.</p>
<p>• You are experienced with infrastructure as code (IaC) tools such as Terraform or Ansible.</p>
<p>Benefits</p>
<p>We have local offices in Paris, London, Marseille and Singapore.</p>
<p>France</p>
<p>• Competitive cash salary and equity</p>
<p>• Food: Daily lunch vouchers</p>
<p>• Sport: Monthly contribution to a Gympass subscription</p>
<p>• Transportation: Monthly contribution to a mobility pass</p>
<p>• Health: Full health insurance for you and your family</p>
<p>• Parental: Generous parental leave policy</p>
<p>UK</p>
<p>• Competitive cash salary and equity</p>
<p>• Insurance</p>
<p>• Transportation: Reimbursement of office parking charges, or £90/month for public transport</p>
<p>• Sport: £90/month reimbursement for gym membership</p>
<p>• Meal voucher: £200 monthly allowance for meals</p>
<p>• Pension plan: SmartPension (5% employee and 3% employer contributions)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Docker, Kubernetes, CI/CD pipelines, Automated deployment tools, Cloud platforms (AWS, Azure, GCP), On-premises infrastructure, Infrastructure as code (IaC) tools (Terraform or Ansible)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI provides high-performance, optimized, open-source and cutting-edge AI models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/3e51d533-1f2d-48e3-9a2b-33fc7e8b0c0c</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>58df2f04-af4</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Data Engineer to join our Data Platform team to partner with our product and business stakeholders across risk, operations, and other domains. As a Data Engineer, you will be responsible for building robust data pipelines and engineering foundations by ingesting data from disparate sources, ensuring data quality and consistency, and enabling better business decisions through reliable data infrastructure across core product areas.</p>
<p>Your primary focus will be on building scalable data pipelines using Airflow to orchestrate data workflows that ingest, transform, and deliver data from various sources into Snowflake and Databricks. You will also design and implement data models in Snowflake that support analytics, reporting, and ML use cases with a focus on performance, reliability, and scalability.</p>
<p>In addition, you will develop infrastructure as code using Terraform to automate and manage cloud resources in AWS, ensuring consistent and reproducible deployments. You will monitor data pipeline health and implement data quality checks to ensure accuracy, completeness, and timeliness of data as business needs evolve.</p>
<p>You will also optimize data processing workflows to improve performance, reduce costs, and handle growing data volumes efficiently. Troubleshooting and resolving data pipeline issues, working through ambiguity to get to the root cause and implementing long-term fixes will be a key part of your role.</p>
<p>As a Data Engineer, you will bridge gaps between data and the business by working with cross-functional teams across the US and India office to understand requirements and translate them into robust technical solutions. You will create comprehensive documentation on data pipelines, data models, and infrastructure, keeping documentation up to date and facilitating knowledge transfer across the team.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>2+ years of data engineering experience with strong technical skills and the ability to architect scalable data solutions.</li>
<li>Hands-on experience with Python for data processing, automation, and building data pipelines.</li>
<li>Proficiency with workflow orchestration tools, preferably Airflow, including DAG development, task dependencies, and monitoring.</li>
<li>Strong SQL skills and experience with cloud data warehouses like Snowflake, including performance optimization and data modeling.</li>
<li>Experience with cloud platforms, preferably AWS (S3, Lambda, EC2, IAM, etc.), and understanding of cloud-based data architectures.</li>
<li>Experience working cross-functionally with data analysts, analytics engineers, data scientists, and business stakeholders to understand requirements and deliver solutions.</li>
<li>An ownership mentality – this engineer will be responsible for the reliability and performance of their data pipelines and expected to fully understand data flows, dependencies, and their implications on downstream users.</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience with dbt for transformation logic and analytics engineering workflows integrated with data pipelines.</li>
<li>Familiarity with Databricks for large-scale data processing, including Spark optimization and Delta Lake.</li>
<li>Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and data infrastructure.</li>
<li>Knowledge of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, slowly changing dimensions).</li>
<li>Experience with CI/CD practices for data pipelines and automated testing frameworks.</li>
<li>Experience with streaming data and real-time processing frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Python, SQL, Snowflake, Databricks, AWS, Terraform, data engineering, data pipelines, data modeling, dbt, Infrastructure as Code, CI/CD, streaming data, real-time processing</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company that provides a banking app for families, serving over 6 million parents and kids.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/e98d9733-8b8c-4ce4-997d-6cf14e35b2f3</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5242ca9a-088</externalid>
      <Title>Staff Automation Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Automation Engineer to have a huge impact on the Business Systems, Security, Production Engineering and IT functions. This role is for a seasoned engineer who thrives on solving complex operational challenges, enhancing system security and stability, and improving efficiency through automation and best practices using AI technologies.</p>
<p>Your day-to-day will involve implementing Agentic AI and LLM-powered workflows using tools like Tines, AWS Agentcore, AWS Bedrock, Claude Code, etc. You will deploy systems with Infrastructure as Code (IaC) (e.g. Terraform) and build and maintain automation workflows across key enterprise platforms (e.g. Atlassian, Okta, Google Workspace, Slack, Zoom, knowledge management systems), cybersecurity systems (e.g. SIEM, GRC platforms, Data Security Platforms), and cloud environments (AWS, GCP).</p>
<p>You will build AI-driven chatbots or intelligent agents that automate tasks, support conversational workflows, and integrate with enterprise applications. You will partner with IT, Security, GRC, Procurement, and business teams to automate operational tasks and processes to reduce toil, improve efficiency and enable business.</p>
<p>You will develop integrations using REST APIs, JSON, webhooks, and scripting languages (JavaScript, Python). You will follow established automation and AI standards for quality, security, and governance; provide improvements where appropriate.</p>
<p>You will troubleshoot, maintain, and optimize existing workflows to improve stability and performance. You will document designs, workflows, configurations, and operational procedures.</p>
<p>You will participate in code reviews, technical discussions, and team-based learning to uplift engineering quality and consistency.</p>
<p>You will work with various tooling in Security, IT, and Production Engineering.</p>
<p>This role requires 10+ years of experience in automation engineering, systems integration, or workflow development. You should have experience with automation platforms such as Tines, Retool, Superblocks, n8n, etc. You should also have hands-on experience with Terraform and containerization technologies.</p>
<p>You should have experience developing LLM-powered automations, conversational interfaces, or Agentic AI assistants. You should have knowledge of Git and modern version control practices.</p>
<p>You should have strong skills in REST APIs, JSON, webhooks, JavaScript, and Python. You should also have familiarity with identity systems (Okta, SCIM) and RBAC concepts.</p>
<p>You should have familiarity with cloud environments such as Google Cloud Platform (GCP) and Amazon Web Services (AWS).</p>
<p>You should be able to break down problems, collaborate cross-functionally, and deliver solutions with moderate guidance.</p>
<p>You should have strong communication skills and the ability to translate functional requirements into technical outputs.</p>
<p>Preferred experience includes familiarity with data platform and database technologies (e.g., Snowflake, PostgreSQL, Cassandra, DynamoDB).</p>
<p>Work perks at Greenlight include medical, dental, vision, and HSA match, paid life insurance, AD&amp;D, and disability benefits, traditional 401k with company match, unlimited PTO, paid company holidays and pop-up bonus holidays, professional development stipends, mental health resources, 1:1 financial planners, fertility healthcare, 100% paid parental and caregiving leave, plus cleaning service and meals during your leave, flexible WFH, both remote and in-office opportunities, fully stocked kitchen, catered lunches, and occasional in-office happy hours, employee resource groups.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000</Salaryrange>
      <Skills>Agentic AI, LLM-powered workflows, Tines, AWS Agentcore, AWS Bedrock, Claude Code, Infrastructure as Code (IaC), Terraform, REST APIs, JSON, webhooks, JavaScript, Python, Git, modern version control practices, identity systems, RBAC concepts, cloud environments, Google Cloud Platform (GCP), Amazon Web Services (AWS), data platform and database technologies, Snowflake, PostgreSQL, Cassandra, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company providing a banking app for families. They serve over 6 million parents and kids.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/d85a9c34-4434-4f6d-8f01-bccb9521c036</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0ad8413d-ec3</externalid>
      <Title>Senior Backend Engineer</Title>
      <Description><![CDATA[<p>This role is ideal for engineers who thrive on complex distributed systems and have deep experience with backend APIs, relational databases, and event-driven architectures.</p>
<p>You will build high-performance, reliable solutions across cloud-native platforms and global infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Identify, design, and develop foundational backend services that power Fal&#39;s commerce platform</li>
<li>Partner with product teams to understand functional requirements and deliver solutions that meet business needs</li>
<li>Write clear, well-tested, and maintainable software and IaC for both new and existing systems</li>
<li>Analyze and improve the robustness and scalability of existing distributed systems, APIs, databases, and infrastructure</li>
<li>Conduct design and code reviews, create developer documentation, and develop testing strategies for robustness and fault tolerance</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of demonstrated experience in building large scale, fault tolerant, distributed systems and API microservices</li>
<li>Expert-level programmer in one or more of Python, Go, or Rust</li>
<li>Experience designing, analyzing and improving efficiency, scalability, and stability of various system resources</li>
<li>Proficiency in writing and maintaining Infrastructure as Code (IaC)</li>
<li>Proficiency in version control practices and integrating IaC with CI/CD pipelines</li>
<li>Experience with payment processors (e.g. Stripe) and billing systems a plus</li>
<li>Experience with Kubernetes, or containers a plus</li>
<li>Experience building and operating data infrastructure (Kinesis, Airflow, Kafka, etc.) a plus</li>
</ul>
<p><strong>What we offer at Fal</strong></p>
<ul>
<li>Interesting and challenging work</li>
<li>Competitive salary and equity</li>
<li>A lot of learning and growth opportunities</li>
<li>We offer visa sponsorship and will help you relocate to San Francisco</li>
<li>Health, dental, and vision insurance (US)</li>
<li>Regular team events and offsite</li>
</ul>
<p><strong>Compensation</strong></p>
<p>$180,000 - $250,000 + equity + comprehensive benefits package</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $250,000</Salaryrange>
      <Skills>Python, Go, Rust, Infrastructure as Code (IaC), Version control practices, CI/CD pipelines, Payment processors, Billing systems, Kubernetes, Containers, Data infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal is a fast-scaling, commerce-driven company.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4009193009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8e582153-6af</externalid>
      <Title>Senior DevOps Lead - Cloud &amp; Autonomous System</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>
<p>We are a small company with under 100 employees, operating with the energy of a startup. However, we are also publicly traded, which gives our employees access to liquid equity.</p>
<p>As a Senior DevOps Lead at Cyngn, you will play a vital role in architecting and managing infrastructure across cloud and autonomous vehicle systems. This position combines traditional cloud DevOps leadership with specialized expertise in robotics and autonomous systems infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Lead and architect cloud and vehicle infrastructure initiatives across AWS and ROS/Linux environments</li>
<li>Design and implement scalable solutions for both cloud services and autonomous vehicle systems</li>
<li>Establish and maintain DevOps best practices, CI/CD pipelines, and infrastructure as code</li>
<li>Drive observability, monitoring, and incident response strategies</li>
<li>Optimize performance and cost efficiency of cloud and edge computing resources</li>
<li>Mentor team members and foster a developer-friendly environment</li>
<li>Manage on-call rotations and incident response processes</li>
<li>Architect solutions for processing and storing large-scale vehicle telemetry data</li>
<li>Lead security initiatives and compliance efforts across infrastructure</li>
</ul>
<p>Requirements</p>
<ul>
<li>10+ years of relevant DevOps/Infrastructure experience</li>
<li>Proven track record as a technical lead in platform or infrastructure teams</li>
<li>Advanced expertise in AWS services, infrastructure as code (Terraform), and Kubernetes</li>
<li>Strong experience with service mesh (Istio) and Helm/Kustomize</li>
<li>Deep understanding of ROS/ROS2 and Linux kernel configurations</li>
<li>Experience with GPU configurations and ML infrastructure</li>
<li>Expertise in ARM and NVIDIA CUDA platform configurations</li>
<li>Strong programming skills in Python and shell scripting</li>
<li>Experience with infrastructure automation (Ansible)</li>
<li>Expertise in CI/CD tools (Jenkins, GitHub Actions)</li>
<li>Strong system architecture and design skills</li>
<li>Excellence in technical documentation</li>
<li>Outstanding problem-solving abilities</li>
<li>Strong leadership and mentoring capabilities</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with autonomous vehicle systems</li>
<li>Track record of optimizing GPU-based ML infrastructure</li>
<li>Experience with large-scale IoT deployments</li>
<li>Contributions to open-source projects</li>
<li>Experience with real-time systems and low-latency requirements</li>
<li>Expertise in security implementations including SSO, IdP, and AWS Cognito</li>
<li>Experience with JFrog Artifactory and container registry management</li>
<li>Proficiency in AWS IoT Greengrass</li>
<li>Experience with container resource management on edge devices</li>
<li>Understanding of CPU affinity and priority scheduling</li>
<li>Track record of implementing cost optimization strategies</li>
<li>Experience with scaling systems both horizontally and vertically</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</li>
<li>Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</li>
<li>Company 401(k)</li>
<li>Commuter Benefits</li>
<li>Flexible vacation policy</li>
<li>Sabbatical leave opportunity after five years with the company</li>
<li>Paid Parental Leave</li>
<li>Daily lunches for in-office employees</li>
<li>Monthly meal and tech allowances for remote employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,000-225,000 per year</Salaryrange>
      <Skills>AWS services, infrastructure as code (Terraform), Kubernetes, service mesh (Istio), Helm/Kustomize, ROS/ROS2, Linux kernel configurations, GPU configurations, ML infrastructure, ARM, NVIDIA CUDA platform configurations, Python, shell scripting, infrastructure automation (Ansible), CI/CD tools (Jenkins, GitHub Actions), system architecture and design skills, technical documentation, problem-solving abilities, leadership and mentoring capabilities, autonomous vehicle systems, optimizing GPU-based ML infrastructure, large-scale IoT deployments, open-source projects, real-time systems and low-latency requirements, security implementations including SSO, IdP, and AWS Cognito, JFrog artifactory and container registry management, AWS IoT Greengrass, container resource management on edge devices, CPU affinity and priority scheduling, cost optimization strategies, scaling systems both horizontally and vertically</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/1c31b7d8-cf85-472f-9358-1e10189cf815</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c1dcea75-d5a</externalid>
      <Title>Member of Technical Staff - Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced engineer to join our team in Freiburg, Germany or San Francisco, USA. As a Member of Technical Staff - Infrastructure Engineer, you will be responsible for maintaining and scaling our research infrastructure, ensuring health and optimizing components to extract peak performance from the system. You will also collaborate with research teams to deeply understand their infrastructure needs and design solutions that balance performance with cost efficiency.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Maintaining research infrastructure, ensuring health, and optimizing components to extract peak performance from the system (on both the application and infrastructure sides)</li>
<li>Scaling infrastructure to meet growing research demands while maintaining reliability and performance</li>
<li>Collaborating with research teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency</li>
<li>Identifying and resolving performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale</li>
<li>Building and evolving telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets</li>
<li>Participating in on-call rotations and incident response to maintain system reliability</li>
</ul>
<p>Technical focus includes:</p>
<ul>
<li>Python, Bash, Go</li>
<li>Kubernetes</li>
<li>Nvidia GPU drivers and operators</li>
<li>OTel, Prometheus</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Experience building or operating large-scale training platforms</li>
<li>Worked with large-scale compute clusters (GPUs)</li>
<li>Proven ability to debug performance and reliability issues across large distributed fleets</li>
<li>Strong problem-solving skills and ability to work independently</li>
<li>Strong communication skills and the ability to work effectively with both internal and external partners</li>
<li>Deep knowledge of modern cloud infrastructure including Kubernetes, Infrastructure as Code, AWS, and GCP</li>
<li>Experience with SLURM</li>
</ul>
<p>We offer a competitive base annual salary of $180,000-$300,000 USD and a hybrid work model with a meaningful in-person presence.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$300,000 USD</Salaryrange>
      <Skills>Python, Bash, Go, Kubernetes, Nvidia GPU drivers, Nvidia GPU operators, OTel, Prometheus, Experience building or operating large-scale training platforms, Worked with large-scale compute clusters (GPUs), Proven ability to debug performance and reliability issues across large distributed fleets, Strong problem-solving skills and ability to work independently, Strong communication skills and the ability to work effectively with both internal and external partners, Deep knowledge of modern cloud infrastructure including Kubernetes, Infrastructure as Code, AWS, and GCP, Experience with SLURM</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Black Forest Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/blackforestlabs.com.png</Employerlogo>
      <Employerdescription>Black Forest Labs develops foundational technologies for image and video creation, including Latent Diffusion, Stable Diffusion, and FLUX.</Employerdescription>
      <Employerwebsite>https://www.blackforestlabs.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/blackforestlabs/jobs/4925659008</Applyto>
      <Location>Freiburg (Germany), San Francisco (USA)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>be821069-a7f</externalid>
      <Title>Asset Data Engineer</Title>
      <Description><![CDATA[<p>Join the Asset Data team and build the streaming data infrastructure that powers Anchorage&#39;s digital asset platform. You&#39;ll design systems that ingest real-time blockchain and market data from diverse providers, transforming raw feeds into certified, trusted data products.</p>
<p>We&#39;re creating contract-governed supply chains that let us onboard new assets and providers quickly while maintaining the low-latency, high-availability SLOs our business depends on.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build streaming data pipelines for blockchain data (onchain transactions, staking rewards, validator info) and market data (prices, trades, order books)</li>
<li>Design and implement data contracts and validation gates that enforce quality and schema compliance at ingestion points</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Collaborate on designing the architecture for standardized ingestion patterns that enable rapid onboarding of new blockchains and market data feeds</li>
<li>Establish redundancy and failover patterns to meet Tier 1 availability and freshness SLOs for critical data products</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Collaborate with Protocols, Trading, and Custody teams to understand their data needs and design certified data products with clear SLAs</li>
<li>Partner with Data Platform team on orchestration, storage patterns (BigLake), and metadata management (Atlan)</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Advocate for contract-governed data supply chains and help establish engineering standards for producer patterns across the org</li>
<li>Contribute to architectural decisions and help mature the team&#39;s practices around observability, testing, and operational excellence</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5-7+ years building streaming or high-throughput data systems: You have experience designing and operating production data pipelines that handle large volumes with low latency and high reliability</li>
<li>Solid backend engineering skills: You&#39;re proficient in Go or Python and have built services that interact with streaming infrastructure (Kafka, pub/sub, websockets, REST APIs)</li>
<li>Blockchain data familiarity: You understand blockchain concepts and are comfortable working with on-chain data (transactions, events, staking, validators) across multiple chains with different data models</li>
<li>Data engineering adjacent skills: You&#39;re comfortable with data transformation patterns, schema evolution, and working with cloud data warehouses (BigQuery) and storage systems (GCS, BigLake)</li>
<li>Operational mindset: You have experience deploying and operating services on cloud platforms (preferably GCP), with strong practices around monitoring, alerting, and incident response</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Staking data expertise: You&#39;ve worked with staking rewards, validator data, or proof-of-stake blockchain infrastructure</li>
<li>Market data systems: You&#39;ve built systems that ingest and process market data (prices, trades, order books) from exchanges or data vendors</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Kafka, pub/sub, websockets, REST APIs, blockchain data, data transformation patterns, schema evolution, cloud data warehouses, storage systems, staking data expertise, market data systems, infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/82139746-fb0e-44b9-bbb6-ae078e5d251a</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while the platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience; you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Metadata and catalog experience, Data observability tools, Infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cbc0884f-89f</externalid>
      <Title>Sr. Staff Engineer (Cloud, Python, Go, LLM)</Title>
      <Description><![CDATA[<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>Join us to transform the future through continuous technological innovation. You are a visionary engineer with a passion for leveraging advanced technologies to solve complex challenges. You thrive in dynamic environments, consistently pushing boundaries to drive innovation. With over eight years of experience in distributed systems, enterprise software, and microservices, you possess deep technical expertise and a strong foundation in Python, Go, and modern cloud platforms.</p>
<p>Your knowledge of Kubernetes, containerization, and hybrid cloud architectures is complemented by a robust understanding of Linux systems and automation tools. You are skilled at collaborating across globally distributed teams, bringing clarity to technical discussions and architectural designs. You are self-driven, continuously seeking to learn and experiment with emerging technologies, including Generative AI and LLMs.</p>
<p>Your communication skills enable you to articulate ideas clearly and influence stakeholders, whether they are internal R&amp;D teams or external customers. You are motivated by opportunities to democratize AI, streamline development processes, and empower others with innovative solutions. Your curiosity and resilience drive you to prototype, test, and refine new concepts, ensuring Synopsys remains at the forefront of the industry.</p>
<p>Above all, you value inclusivity, teamwork, and the pursuit of excellence.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain scalable cloud services for R&amp;D teams to host Generative AI applications on leading cloud platforms.</li>
<li>Build and deliver cloud-native, containerized AI systems for on-premises customers, ensuring seamless integration and deployment.</li>
<li>Lead orchestration of GPU scheduling within Kubernetes ecosystems, utilizing tools like Nvidia GPU Operator and Multi-Instance GPU (MIG).</li>
<li>Architect reliable and cost-effective hybrid cloud solutions using cutting-edge technologies such as Docker, Kubernetes Cluster Federation, and Azure Arc.</li>
<li>Streamline onboarding processes for internal products and external customers, creating assets and artifacts that facilitate access to GenAI technologies.</li>
<li>Collaborate with external customers to understand their environments, constraints, and architectures, defining and integrating tailored platforms and products.</li>
<li>Prototype, experiment, and test newer technologies, including Generative AI, LLMs, and inference servers, to drive innovation within Synopsys.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>BS/MS in Computer Science, Software Engineering, or equivalent.</li>
<li>8+ years of experience in distributed systems, enterprise software, and microservices.</li>
<li>Expert proficiency in Python and Go programming languages.</li>
<li>Deep understanding of Kubernetes (on-premises and managed services like AKS/EKS/GKE).</li>
<li>Strong systems knowledge: Linux kernel, cgroups, namespaces, and Docker.</li>
<li>Experience with CI/CD automation, Infrastructure as Code (IaC), and cloud providers (AWS/GCP/Azure).</li>
<li>Ability to design complex distributed systems and solve challenging problems efficiently.</li>
<li>Experience with RDBMS (PostgreSQL preferred) for handling large data sets.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Self-motivated with a continuous learning mindset.</li>
<li>Experience working with globally distributed teams.</li>
<li>Nice to have: Experience with Generative AI, LLMs, inference servers, and prototyping new technologies.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>Innovative problem-solver who thrives in ambiguity and complexity.</li>
<li>Collaborative team player, comfortable working with global and cross-functional teams.</li>
<li>Clear and effective communicator, able to articulate technical concepts to diverse audiences.</li>
<li>Resilient and adaptable, eager to learn and experiment with new technologies.</li>
<li>Inclusive and empathetic, valuing diverse perspectives and backgrounds.</li>
<li>Driven by curiosity, continuous improvement, and the pursuit of excellence.</li>
</ul>
<p><strong>The Team You’ll Be A Part Of</strong></p>
<p>You’ll join the Synopsys Platform Engineering team, an innovative, globally distributed group dedicated to transforming R&amp;D product development and deployment. Our team is passionate about leveraging cloud, containerization, and AI technologies to streamline workflows and accelerate innovation. We work collaboratively, experiment boldly, and support each other in delivering high-impact solutions that shape the future of electronic design automation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Kubernetes, containerization, hybrid cloud architectures, Linux systems, automation tools, CI/CD automation, Infrastructure as Code (IaC), cloud providers (AWS/GCP/Azure), RDBMS (PostgreSQL), Generative AI, LLMs, inference servers, prototyping new technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys develops and maintains software used in chip design, verification, and manufacturing.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/hyderabad/sr-staff-engineer-cloud-python-go-llm/44408/92664451936</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>a560bd4c-a1a</externalid>
      <Title>Cloud Security Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Cloud Security Engineer to join our team. As a Cloud Security Engineer at Starling, you&#39;ll build and support tooling and infrastructure spanning AWS and GCP, supporting our internal operations and interfacing with other teams to deliver the services our business depends on.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Engineer Secure Foundations: You will lead the design and implementation of critical security services, with a heavy focus on building robust Identity and Access Management (IAM) systems and automated, API-driven certificate management workflows.</li>
<li>Security-as-Code &amp; Scalability: Leveraging a software-first philosophy, you will develop and maintain high-quality, scalable security tooling and middleware within ECS and Kubernetes environments, ensuring security logic is integrated directly into the deployment pipeline.</li>
<li>Collaborative Code Ownership: You will serve as a technical authority in cross-functional code reviews, acting as an engineering peer who helps teams bake security into their services from the first line of code to the final pull request.</li>
<li>Proactive System Hardening: You will stay ahead of the evolving threat landscape by treating security as a continuous engineering challenge, proactively identifying vulnerabilities and architecting technical solutions to fortify our global ecosystem.</li>
</ul>
<p>Professional Requirements:</p>
<ul>
<li>Demonstrated ability to architect secure, distributed systems with a focus on programmatic IAM and automated, API-driven PKI management.</li>
<li>Extensive experience with Infrastructure as Code (IaC) in Terraform and a deep commitment to writing clean, maintainable, and production-grade code, ideally in Golang.</li>
<li>A test-first mentality toward security, with experience building unit and integration tests into CI/CD pipelines to ensure that security guardrails are as reliable as the features they protect.</li>
<li>A strong conceptual grasp of cryptographic primitives and hands-on experience securing containerized workloads and service meshes within ECS and Kubernetes.</li>
<li>A track record of taking end-to-end ownership of complex technical projects, from initial design docs and RFCs through to deployment and observability.</li>
<li>A belief that if it isn&#39;t tested, it&#39;s broken, and a drive to proactively identify and fix vulnerabilities by treating security as a continuous engineering challenge.</li>
</ul>
<p>Our Team Philosophy:</p>
<p>The Security Engineering team is a diverse and dynamic group passionate about building secure and resilient systems. We&#39;re enthusiastic about security, but we&#39;re not about rigid, one-size-fits-all controls. We believe in striking a balance between protecting our systems and empowering our developers to build and innovate.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud Security, AWS, GCP, Identity and Access Management, API-driven Certificate Management, Infrastructure as Code, Terraform, Golang, Cryptographic Primitives, Containerized Workloads, Service Meshes</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a fully licensed UK bank with over 3,000 employees across four offices.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3B7E26FC24</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>abdcbaeb-1c5</externalid>
      <Title>Applied AI Engineer, Senior/Staff Devops/SRE - EMEA</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity.</p>
<p>Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions.</p>
<p>Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Our offerings include le Chat, the AI assistant for life and work.</p>
<p>We are seeking an Applied AI Engineer focused on DevOps to facilitate the adoption of our products among customers and collaborate with them to address complex technical challenges.</p>
<p>Applied AI Engineers, ML Infra at Mistral AI work directly with customers to quickly understand their greatest challenges and design and implement AI solutions.</p>
<p>In this role, you’ll apply your problem-solving ability, creativity, and technical skills to help organizations use AI to drive real impact in the world.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Onboard customers on our products, providing guidance on deployment and integration, and ensuring the best production setup from the low-level GPU stack up to infrastructure, back-end and front-end interfaces.</li>
<li>Work on deploying state-of-the-art AI applications from consumer products to industrial use cases, driving with our customers a crucial technological transformation.</li>
<li>Collaborate with our researchers, other AI engineers, and product engineers on our most complex customer projects involving deployment, scaling, and contributing to our open-source codebases for tasks such as inference and fine-tuning.</li>
<li>Be involved in pre-sales calls to understand potential clients&#39; needs, challenges, and aspirations. Provide technical guidance on our products and explain Mistral technologies to various stakeholders.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>You are fluent in English.</li>
<li>You hold a Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>You have 2+ years of experience in a DevOps or Site Reliability Engineering role.</li>
<li>You&#39;re experienced with deploying and managing AI-based products in production environments.</li>
<li>You are fluent in Python.</li>
<li>You have experience with containerization technologies such as Docker and Kubernetes.</li>
<li>You have experience with CI/CD pipelines and automated deployment tools.</li>
<li>You have a deep understanding of cloud platforms (AWS, Azure, GCP) and on-premises infrastructure.</li>
<li>You are experienced with infrastructure as code (IaC) tools such as Terraform or Ansible.</li>
<li>You have strong communication skills and can explain complex technical concepts in simple terms to technical and non-technical audiences.</li>
</ul>
<p>Ideally You Have:</p>
<ul>
<li>Experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect, or Technical Product Manager.</li>
<li>Familiarity with AI frameworks such as PyTorch or TensorFlow.</li>
<li>Contributions to open-source projects, particularly in the space of DevOps or AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>We have local offices in Paris, London, Marseille and Singapore.</p>
<p>France</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
</ul>
<p>UK</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Insurance</li>
<li>Transportation: Reimbursement of office parking charges, or £90/month for public transport</li>
<li>Sport: £90/month reimbursement for gym membership</li>
<li>Meal voucher: £200 monthly allowance for meals</li>
<li>Pension plan: SmartPension (5% employee &amp; 3% employer contributions)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Docker, Kubernetes, CI/CD pipelines, Automated deployment tools, Cloud platforms (AWS, Azure, GCP), On-premises infrastructure, Infrastructure as code (IaC) tools (Terraform or Ansible)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI solutions for various industries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/3e51d533-1f2d-48e3-9a2b-33fc7e8b0c0c</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>7bce292a-74f</externalid>
      <Title>CyberSecurity Team Lead, Infrastructure and Application</Title>
      <Description><![CDATA[<p>Role summary</p>
<p>Embedded directly within Mistral&#39;s Security Engineering ecosystem, you will architect and enforce the security posture of our entire technical stack, from on-premise foundations to cloud-native deployments.</p>
<p>As a CyberSecurity Team Lead, you will oversee the identification, prioritization, and remediation of vulnerabilities across both On-Prem and Cloud infrastructures as well as internal applications.</p>
<p>You will select, deploy, and maintain the tools needed for visibility and protection, including CNAPP, CSPM, SAST/DAST, secret scanning, and SBOM/CVE tracking.</p>
<ul>
<li>Integrate security controls and automated gates directly into CI/CD pipelines to catch vulnerabilities before deployment (Shift Left).</li>
<li>Partner with engineering teams to interpret findings and &#39;ease the fix,&#39; providing patches, code snippets, or architectural advice to resolve issues quickly.</li>
<li>Define and maintain rigorous security guidelines and best practices for developers and system administrators.</li>
<li>Design and lead security awareness programs and technical training tailored for developers and admins to reduce human risk.</li>
<li>Track and define key security metrics (MTTR, coverage, vulnerability density) to visualize posture and progress to leadership.</li>
</ul>
<p>Who you are</p>
<ul>
<li>6+ years of experience in Information Security, with a specific focus on Application Security, Cloud Security, or DevSecOps.</li>
<li>Strong scripting skills (Python, Go, or Bash) to automate security tasks and integrate tools.</li>
<li>Deep understanding of CI/CD ecosystems and container orchestration (Kubernetes/Docker).</li>
<li>Hands-on experience with modern security tooling (e.g., Wiz, Snyk, SonarQube, Prisma, or similar enterprise tools).</li>
<li>Collaborative mindset: you view developers as partners, not adversaries, and focus on enabling them to code securely.</li>
<li>A clear communicator, autonomous, and capable of translating technical security risks into actionable engineering tasks.</li>
</ul>
<p>It would be ideal if you also have:</p>
<ul>
<li>Industry certifications such as CISSP, CCSP, OSCP, or cloud-specific security certifications.</li>
<li>Strong Infrastructure as Code (IaC) experience with Terraform or Ansible.</li>
<li>Experience in offensive security (penetration testing) to better understand attacker mindsets.</li>
<li>Prior experience securing large-scale AI or Machine Learning infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Application Security, Cloud Security, DevSecOps, CI/CD, Container Orchestration, Modern Security Tooling, Scripting Skills, Infrastructure as Code, Industry Certifications, Offensive Security, Large-Scale AI or Machine Learning Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c9b75928-dd48-4432-b6f1-fc0b24e51657</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>787e5e21-119</externalid>
      <Title>Applied AI, Senior MLOps Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>About The Job</p>
<p>Mistral AI is seeking an Applied AI Engineer focused on DevOps to facilitate the adoption of its products among customers and collaborate with them to address complex technical challenges. Applied AI Engineers, ML Infra at Mistral AI work directly with customers to quickly understand their greatest challenges and design and implement AI solutions.</p>
<p>Responsibilities</p>
<ul>
<li>Onboard customers on our products, providing guidance on deployment and integration, and ensuring the best production setup from the low-level GPU stack up to infrastructure, back-end and front-end interfaces.</li>
<li>Work on deploying state-of-the-art AI applications from consumer products to industrial use cases, driving with our customers a crucial technological transformation.</li>
<li>Collaborate with our researchers, other AI engineers, and product engineers on our most complex customer projects involving deployment, scaling, and contributing to our open-source codebases for tasks such as inference and fine-tuning.</li>
<li>Be involved in pre-sales calls to understand potential clients&#39; needs, challenges, and aspirations. Provide technical guidance on our products and explain Mistral technologies to various stakeholders.</li>
</ul>
<p>About You</p>
<ul>
<li>Fluent in English</li>
<li>Hold a Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field</li>
<li>5+ years of experience in a DevOps or Site Reliability Engineering role</li>
<li>Experienced with deploying and managing AI-based products in production environments</li>
<li>Fluent in Python</li>
<li>Experience with containerization technologies such as Docker and Kubernetes</li>
<li>Experience with CI/CD pipelines and automated deployment tools</li>
<li>Deep understanding of cloud platforms (AWS, Azure, GCP) and on-premises infrastructure</li>
<li>Experienced with infrastructure as code (IaC) tools such as Terraform or Ansible</li>
<li>Strong communication skills with an ability to explain complex technical concepts in simple terms to technical and non-technical audiences</li>
</ul>
<p>Ideally You Have</p>
<ul>
<li>Experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect, or Technical Product Manager</li>
<li>Familiarity with AI frameworks such as PyTorch or TensorFlow</li>
<li>Contributions to open-source projects, particularly in the space of DevOps or AI</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Docker, Kubernetes, CI/CD pipelines, Automated deployment tools, Cloud platforms (AWS, Azure, GCP), On-premises infrastructure, Infrastructure as code (IaC) tools (Terraform or Ansible), PyTorch, TensorFlow, Open-source projects (DevOps or AI)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI-powered products and solutions for various industries. It has a distributed workforce across multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/fb15ec7f-d9e9-4246-9d36-486d46c289e4</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>c0d52342-3a4</externalid>
      <Title>Senior/Lead Backend Engineer</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>We&#39;re seeking a highly skilled Senior/Lead Backend Engineer to join our team and build the core digital infrastructure for a next-generation energy company. You will work closely with product to engineer backend infrastructure that is second-to-none.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Construct a real-time digital twin of our renewable generation and customer demand</li>
<li>Develop messaging interfaces with our third-party providers</li>
<li>Build high-volume pipelines for processing customer energy consumption</li>
<li>Build the backend of a world-class energy app</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Demonstrable excellence as a Software Engineer</li>
<li>Ideally 2+ years experience</li>
<li>Proven ability to define and build complex systems</li>
<li>Hands-on experience shipping highly-available production systems</li>
<li>Clear communication</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with infrastructure as code (e.g. AWS CDK)</li>
<li>Experience in the Energy sector</li>
<li>Experience shipping consumer-facing products</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineer, AWS CDK, Energy sector, Consumer-facing products, Infrastructure as code, Complex systems, Highly-available production systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup aiming to deliver a terawatt of renewable energy.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/p9s5HbFQDiFtEEbrWJ3FHG/hybrid-senior%2Flead-backend-engineer-in-dubai-at-fuse-energy</Applyto>
      <Location>Dubai</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ffc8f1e0-500</externalid>
      <Title>FBS Information Security Analyst (SSPM experience)</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Serve as a key member of the Platform Security team, helping drive security for our Software-as-a-Service (SaaS) platforms</li>
<li>Ensure SaaS applications are configured securely, users have appropriate access, and third-party integrations don&#39;t introduce vulnerabilities</li>
<li>Ensure compliance with relevant policies and regulations</li>
<li>Support onboarding of discovered SaaS platforms to a SaaS Security Posture Management (SSPM) tool</li>
<li>Monitor the SSPM tool to review and triage alerts for onboarded applications</li>
<li>Review the security of third-party integrations to prevent data breaches and other security risks</li>
<li>Tune alerting rules to minimize false positives</li>
<li>Ensure required access controls are in place for users and service accounts</li>
<li>Assist with reporting to ensure compliance with policies and regulatory requirements</li>
<li>Collaborate with application owners and IT teams to ensure security best practices are followed</li>
<li>Review infrastructure as code changes to ensure resources are provisioned using principles of least privilege and meet security best practices</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>A dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Paid Time Off</li>
<li>Training &amp; Development opportunities in partnership with renowned companies</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SSPM, SaaS, security, compliance, infrastructure as code, least privilege, security best practices</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global professional services company that provides consulting, technology, and outsourcing services to businesses. It has a workforce of nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uiFzPRmdBAZBFtUvRjAssY/remote-fbs-information-security-analyst-(sspm-experience)-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9148d21c-e5f</externalid>
      <Title>Senior/Lead Backend Engineer</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>We&#39;re looking for a Senior/Lead Backend Engineer to join our team at Fuse Energy. As a key member of our product team, you will work closely with us to build the core digital infrastructure for a next-generation energy company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Construct a real-time digital twin of our renewable generation and customer demand</li>
<li>Develop messaging interfaces with our third-party providers</li>
<li>Build high-volume pipelines for processing customer energy consumption</li>
<li>Build the backend of a world-class energy app</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Demonstrable excellence as a Software Engineer</li>
<li>Ideally 2+ years experience</li>
<li>Proven ability to define and build complex systems</li>
<li>Hands-on experience shipping highly-available production systems</li>
<li>Clear communication</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with infrastructure as code (e.g. AWS CDK)</li>
<li>Experience in the Energy sector</li>
<li>Experience shipping consumer-facing products</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineer, AWS CDK, Energy sector, Consumer-facing products, Infrastructure as code, Highly-available production systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup with a mission to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/gr5gkzpErS17pXVNJuoxDZ/hybrid-senior%2Flead-backend-engineer-in-london-at-fuse-energy</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b55675c9-0db</externalid>
      <Title>Head of Engineering (Platform)</Title>
      <Description><![CDATA[<p><strong>Head of Engineering (Platform)</strong></p>
<p>Fuse Energy is seeking a Head of Engineering (Platform) to lead the development of our core backend systems and platform infrastructure. As a key member of our team, you will own the architecture and scalability of the platform, ensuring we build robust, high-performance systems that enable rapid product iteration and exceptional customer experiences.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the backend platform architecture, infrastructure, and foundational services</li>
<li>Drive the evolution of our platform to support scale, performance, and reliability</li>
<li>Build a real-time digital twin of renewable generation and customer demand</li>
<li>Design and manage high-volume data pipelines for energy consumption and system telemetry</li>
<li>Lead the development of integration layers and messaging interfaces with third-party services</li>
<li>Establish engineering best practices for observability, CI/CD, testing, and scalability</li>
<li>Partner closely with product and backend teams to support rapid development cycles</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Proven track record as a senior software engineer or tech lead, ideally with platform/backend focus</li>
<li>5+ years experience in software engineering, with 2+ years in a leadership role</li>
<li>Experience building and operating production-grade systems at scale</li>
<li>Strong understanding of system design, distributed computing, and cloud infrastructure</li>
<li>Clear and proactive communication, with the ability to align cross-functional teams</li>
<li>Hands-on approach to solving problems and making strategic decisions</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with Infrastructure as Code (e.g., AWS CDK, Terraform)</li>
<li>Experience with event-driven architecture, messaging queues, or stream processing</li>
<li>Familiarity with building internal platforms or developer tooling</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend platform architecture, infrastructure as code, event-driven architecture, messaging queues, stream processing, system design, distributed computing, cloud infrastructure, AWS CDK, Terraform, CI/CD, testing, scalability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup that aims to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dSZh2emP6XmnvYfQnTTL5q/hybrid-head-of-engineering-(platform)-in-london-at-fuse-energy</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>d0fbf43c-a77</externalid>
      <Title>Director, Cloud Automation Engineer</Title>
      <Description><![CDATA[<p>About this role</p>
<p>BlackRock&#39;s purpose is to help more and more people experience financial well-being. As a fiduciary to investors and a leading provider of financial technology, our clients turn to us for the solutions they need when planning for their most important goals.</p>
<p>This is a senior individual contributor engineering role leading the design, development, and implementation of advanced cloud automation solutions. You&#39;ll own the execution and delivery of large-scale projects while collaborating across multiple teams in addition to being responsible for hands-on keyboard execution of project components. Example projects include migration of existing on-prem systems to cloud, migration of existing cloud systems to alternate/new cloud(s), integration of acquired systems into our unified environment, and deployment of net-new cloud systems. This role offers high executive visibility, as you will influence strategic decisions and present progress and outcomes to senior leadership.</p>
<p>This role sits within the Aladdin Platform Hosting Services team, which is responsible for building and managing the infrastructure hosting platform upon which the Aladdin system runs. Our team provides reusable infrastructure services and components that allow developers to leverage cloud capabilities in a simple, cloud-agnostic, and scalable manner.</p>
<p>Key Responsibilities</p>
<ul>
<li>Architect and implement secure, scalable, and automated cloud infrastructure solutions across multi-cloud environments (AWS, Azure, GCP) tailored for financial workloads.</li>
<li>Lead automation initiatives using Infrastructure as Code (IaC) tools such as Terraform, Ansible, and CloudFormation to support mission-critical financial applications.</li>
<li>Develop CI/CD pipelines for cloud deployments and application delivery with strict adherence to financial compliance and audit requirements.</li>
<li>Champion an automation-first mindset by identifying repetitive tasks and implementing automation solutions—even for processes that initially appear as one-offs.</li>
<li>Leverage AI tools and frameworks to enhance efficiency, optimize workflows, and enable the broader engineering team to adopt AI-driven solutions.</li>
<li>Collaborate with risk, compliance, and security teams to ensure all automation processes meet regulatory standards (e.g., SOX, PCI-DSS, FFIEC).</li>
<li>Adopt a product-centric approach, treating internal platforms and automation frameworks as products with clear ownership, lifecycle management, and continuous improvement.</li>
<li>Own execution and delivery of large-scale projects, balancing hands-on technical work with cross-functional collaboration across engineering, operations, and governance teams.</li>
<li>Provide executive-level updates, influencing strategic decisions and ensuring alignment with organizational priorities.</li>
<li>Evaluate emerging technologies for automation, scalability, and reliability in financial contexts, including cost optimization and resiliency planning.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>10+ years of experience in technology systems development or management, with at least 5+ years focused on cloud automation and infrastructure engineering.</li>
<li>3+ years expertise in Infrastructure as Code (IaC) tools such as Terraform, Ansible, or similar.</li>
<li>Strong experience with cloud platforms (AWS, Azure, GCP) and hybrid environments in regulated industries.</li>
<li>Proficiency in scripting and programming languages (Python, PowerShell, Bash).</li>
<li>3+ years hands-on experience with CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc), containerization (Docker, Kubernetes), and orchestration frameworks.</li>
<li>Deep understanding of networking, security, and compliance in cloud environments, including encryption, identity management, and audit logging.</li>
<li>Excellent leadership, communication, and problem-solving skills.</li>
<li>Experience contributing to Agile teams and helping them achieve their goals</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Advanced certifications such as AWS Certified Solutions Architect – Professional, Azure Solutions Architect Expert, or Google Professional Cloud Architect.</li>
<li>Experience with financial compliance frameworks (SOX, PCI-DSS, FFIEC) and automated security controls.</li>
<li>Background in DevSecOps, automated governance, and AI-driven automation strategies.</li>
<li>Background in Kubernetes (k8s) system management</li>
<li>Experience with “next-gen” IaC tools such as Crossplane, Radius, Pulumi, env0, spacelift, etc.</li>
</ul>
<p>You have:</p>
<ul>
<li><p>Automation-First Attitude: Ability to identify repetitive tasks and implement automation solutions proactively, even for processes that initially appear as one-offs.</p>
</li>
<li><p>AI Proficiency: Skilled in leveraging AI tools to improve efficiency and enable team adoption of AI-driven workflows.</p>
</li>
<li><p>Product View: Treats internal platforms and automation frameworks as products, ensuring clear ownership, lifecycle management, and continuous improvement.</p>
</li>
<li><p>Execution &amp; Leadership: Capable of delivering large-scale projects through hands-on technical work while collaborating effectively across multiple teams.</p>
</li>
<li><p>Executive Communication: Comfortable presenting technical strategies and outcomes to senior leadership and influencing organizational priorities.</p>
</li>
<li><p>Motivated: You enjoy rolling up your sleeves and getting your hands dirty.</p>
</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li><p>Opportunity to lead strategic cloud automation initiatives for the Aladdin platform in a highly regulated financial environment.</p>
</li>
<li><p>Work with cutting-edge technologies and shape the future of cloud engineering in finance.</p>
</li>
<li><p>Collaborative, innovative environment with career growth opportunities and executive exposure.</p>
</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
<p>This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>
<p>For additional information on BlackRock, follow @blackrock on Twitter.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud automation, infrastructure engineering, Infrastructure as Code (IaC), Terraform, Ansible, CloudFormation, CI/CD pipelines, Azure DevOps, GitHub Actions, Jenkins, containerization, Docker, Kubernetes, orchestration frameworks, Python, PowerShell, Bash, networking, security, compliance, encryption, identity management, audit logging</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that manages approximately $11 trillion in assets on behalf of investors worldwide.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/iWS9DZix7JsvYkHdrkTdwP/director%2C-cloud-automation-engineer-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3b55924b-e56</externalid>
      <Title>VP, Software Engineer (Fullstack - .Net &amp; React)</Title>
      <Description><![CDATA[<p>About this role</p>
<p>The Document Intelligence Platform is BlackRock&#39;s solution for document processing, content management, and data distribution within alternative investments—serving portfolio managers, fund analysts, and investment professionals across the firm.</p>
<p>As a Vice President, you will be a senior technical leader responsible for architecture, delivery, and operational excellence across the full stack. You will lead a team of software engineers, drive technical strategy, collaborate with senior stakeholders, and ensure our platform scales to meet evolving business needs.</p>
<p>Key Responsibilities</p>
<p>Technical Leadership &amp; Architecture (35%)</p>
<ul>
<li><p>Define and evolve the technical architecture across frontend and backend systems</p>
</li>
<li><p>Lead strategic direction of the document processing engine, content management services, and data distribution layer</p>
</li>
<li><p>Champion engineering best practices: code quality, testing, security, and observability</p>
</li>
<li><p>Evaluate new technologies and provide guidance on complex system design decisions</p>
</li>
<li><p>Drive technical innovation while maintaining production stability</p>
</li>
</ul>
<p>Team Leadership &amp; Development (30%)</p>
<ul>
<li><p>Build, lead, and mentor a high-performing team of 3-8 software engineers</p>
</li>
<li><p>Set team goals, conduct performance reviews, and drive career development</p>
</li>
<li><p>Foster a culture of technical excellence and continuous improvement</p>
</li>
<li><p>Recruit top engineering talent and develop team competencies</p>
</li>
<li><p>Champion agile practices and efficient delivery processes</p>
</li>
</ul>
<p>Delivery &amp; Execution (20%)</p>
<ul>
<li><p>Ensure timely, high-quality delivery of platform features</p>
</li>
<li><p>Oversee production operations, incident management, and system reliability (SLAs/SLOs)</p>
</li>
<li><p>Balance technical debt management with feature development</p>
</li>
<li><p>Drive automation of build, test, and deployment processes</p>
</li>
<li><p>Ensure compliance with security, regulatory, and audit requirements</p>
</li>
</ul>
<p>Stakeholder Management &amp; Communication (15%)</p>
<ul>
<li><p>Partner with Product Management to define technical roadmaps</p>
</li>
<li><p>Communicate technical strategy to senior leadership</p>
</li>
<li><p>Articulate technical concepts and trade-offs to non-technical audiences</p>
</li>
<li><p>Manage expectations and negotiate scope, timelines, and resources</p>
</li>
</ul>
<p>Technical Expertise</p>
<p>Backend Technologies (Expert Level)</p>
<ul>
<li><p>C# / .NET (.NET 6/7/8, ASP.NET Core)</p>
</li>
<li><p>SQL Server – database design, optimization, stored procedures, indexing strategies</p>
</li>
<li><p>Entity Framework Core – ORM patterns and performance optimization</p>
</li>
<li><p>RESTful API design and implementation</p>
</li>
<li><p>Microservices architecture – service decomposition, inter-service communication</p>
</li>
<li><p>Message queues and event-driven architecture (Azure Service Bus, Kafka)</p>
</li>
<li><p>Caching strategies (Redis, in-memory caching)</p>
</li>
</ul>
<p>Frontend Technologies (Strong Proficiency)</p>
<ul>
<li><p>React and TypeScript – modern frontend development</p>
</li>
<li><p>Redux and state management patterns</p>
</li>
<li><p>Single-spa or micro-frontend architectures</p>
</li>
<li><p>SCSS/CSS – responsive design and component styling</p>
</li>
</ul>
<p>Cloud &amp; DevOps</p>
<ul>
<li><p>Microsoft Azure services (extensive experience), including:</p>
<ul>
<li><p>Azure App Service, Azure Functions, Blob Storage</p>
</li>
<li><p>Azure SQL Database, Cosmos DB, Redis Cache</p>
</li>
<li><p>Application Insights, Key Vault, Azure DevOps</p>
</li>
</ul>
</li>
<li><p>Infrastructure as Code (Terraform, ARM templates, Bicep)</p>
</li>
<li><p>CI/CD pipeline design and implementation</p>
</li>
<li><p>Git workflows and branching strategies for large teams</p>
</li>
<li><p>Monitoring and observability (Application Insights, Grafana)</p>
</li>
<li><p>Performance engineering and scalability optimization</p>
</li>
</ul>
<p>Architecture &amp; Design</p>
<ul>
<li><p>Microservices architecture – service patterns, API gateways, service mesh</p>
</li>
<li><p>Clean / Hexagonal Architecture principles</p>
</li>
<li><p>Domain-Driven Design (DDD) concepts</p>
</li>
<li><p>API design (OpenAPI/Swagger, versioning strategies)</p>
</li>
<li><p>Authentication and authorization (OAuth 2.0, JWT, OIDC)</p>
</li>
<li><p>Security architecture and threat modeling</p>
</li>
<li><p>Data architecture and data modeling</p>
</li>
<li><p>High availability and disaster recovery design</p>
</li>
</ul>
<p>Experience &amp; Background</p>
<ul>
<li><p>8-12+ years of professional software engineering experience</p>
</li>
<li><p>5+ years in technical leadership roles</p>
</li>
<li><p>3+ years experience leading teams of 3+ engineers</p>
</li>
<li><p>Proven track record delivering large-scale, mission-critical systems</p>
</li>
<li><p>Deep expertise in full-stack development</p>
</li>
<li><p>Strong background in financial services technology preferred</p>
</li>
<li><p>Experience with cloud-native architectures on Azure</p>
</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>About BlackRock</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
<p>This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, .NET, SQL Server, Entity Framework Core, RESTful API, Microservices architecture, Message queues, Event-driven architecture, Caching strategies, React, TypeScript, Redux, Single-spa, SCSS, CSS, Microsoft Azure services, Infrastructure as Code, CI/CD pipeline, Git workflows, Monitoring, Observability, Performance engineering, Scalability optimization, Cloud-native architectures, Azure, Financial services technology</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/3992RBEXGwvJj5gB7H9r9z/vp%2C-software-engineer-(fullstack---.net-%26amp%3B-react)-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>cefc0fee-e40</externalid>
      <Title>Associate, Full-Stack Software Engineer (Python &amp; React)</Title>
      <Description><![CDATA[<p>About this role</p>
<p>This role sits within Preqin, a part of BlackRock. Preqin plays a key role in how we are revolutionizing private markets data and technology for clients globally, complementing our existing Aladdin technology platform to deliver investment solutions for the whole portfolio.</p>
<p>As an Associate Engineer on the Orion data team, you will guide a high-performing engineering pod responsible for the technical and operational excellence of our data architecture. Your work ensures our data flows are robust, scalable, and aligned with business needs, forming the backbone of Orion’s data-driven products and insights. Your focus will be on delivering high-quality solutions for our stakeholders by leveraging strong data literacy, product awareness, and clear communication. You will collaborate closely with product managers and data owners to ideate, design, and deliver new features, while actively shaping the direction of our technical strategy and data platform.</p>
<p>Key responsibilities will include:</p>
<ul>
<li><p>Develop workflows that seamlessly combine AI/ML with human expertise to accelerate data collection and improve decision-making processes.</p>
</li>
<li><p>Prioritize work based on data-driven insights and outcome-based goals in collaboration with stakeholders.</p>
</li>
<li><p>Design and implement scalable, reliable data pipelines that ingest, process, and deliver high-quality data to downstream applications and analytics platforms.</p>
</li>
<li><p>Work closely with engineering teams across the business, ensuring the best technical solutions are adopted, and elevate development standards through knowledge sharing and best practices.</p>
</li>
<li><p>Collaborate across engineering, product, and data science teams to translate business requirements into technical solutions and ensure our data assets are organized and accessible.</p>
</li>
<li><p>Mentor and guide team members, fostering a culture of continuous improvement, innovation, and open communication.</p>
</li>
<li><p>Actively participate in technical discussions about new product directions, data modelling, and architectural decisions, ensuring our technology platform remains extensible.</p>
</li>
<li><p>Lead an engineering pod using strong leadership and influence skills.</p>
</li>
<li><p>Manage a team of junior and mid-level engineers, supporting their careers and growth.</p>
</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li><p>3+ years’ experience in software engineering.</p>
</li>
<li><p>Strong technical ability across the full stack: Python and FastAPI; React and TypeScript are a plus.</p>
</li>
<li><p>Experience with databases like Postgres and Snowflake.</p>
</li>
<li><p>Experience working with cloud provider services – Azure or AWS (preferred) – and using infrastructure as code.</p>
</li>
<li><p>A data-driven mindset to make development decisions based on robust analyses.</p>
</li>
<li><p>Ability to collaborate effectively with design, engineering, and data science teams to build our technical solutions.</p>
</li>
<li><p>You have driven technical solution design, taking the balance of engineering quality, testing, scalability and security into consideration.</p>
</li>
<li><p>A “let’s do it” and “challenge accepted” attitude when faced with unfamiliar or challenging tasks, and a willingness to learn new technologies and ways of working.</p>
</li>
<li><p>Excellent verbal and written communication and interpersonal skills, with the ability to influence at all organizational levels and bridge technical perspectives.</p>
</li>
<li><p>Proficiency in English required; additional languages and prior work experience at a global firm are desirable.</p>
</li>
<li><p>People management experience.</p>
</li>
<li><p>Experience with AI-related projects/products.</p>
</li>
<li><p>Knowledge of Infrastructure as Code (IaC) tools for provisioning cloud resources, CI/CD pipelines, and Cloud-Native distributed containerized microservice orchestration.</p>
</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, FastAPI, React, Typescript, Postgres, Snowflake, Azure, AWS, Infrastructure as Code, CI/CD pipelines, Cloud-Native distributed containerized microservice orchestration, AI/ML, Data science, Cloud computing, DevOps, Agile development</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/rF3NUqLpdmQwRkQEgaTaEb/associate%2C-full-stack-software-engineer-(python-%26amp%3B-react)-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>69369815-a11</externalid>
      <Title>Associate/Vice President, AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>At BlackRock, technology underpins everything we do. AI is a core strategic priority for the firm, embedded across Aladdin and our investment, client, and operational platforms. We are seeking an AI Infrastructure Engineer to help build and operate the foundational infrastructure that enables AI systems to scale safely, securely, and reliably across the enterprise.</p>
<p>This role sits within Aladdin Platform Engineering and focuses on the infrastructure and platform services required to support machine learning models, large language models (LLMs), and emerging AI capabilities in production. The successful candidate will work closely with AI Engineers, Data Scientists, Platform Engineers, Security, and Product partners to deliver resilient, cloud native AI platforms in a highly regulated environment.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, build, and operate AI-focused infrastructure platforms supporting model development, training, evaluation, and inference.</li>
<li>Engineer scalable, reliable, and secure cloud-native services to support AI workloads across AWS, Azure, and hybrid environments.</li>
<li>Partner with AI Engineering and Data Science teams to improve developer experience, performance, and operational stability of AI systems.</li>
<li>Enable production deployment of ML models and LLMs within governed enterprise environments, aligned with firmwide risk and compliance standards.</li>
<li>Implement and maintain infrastructure as code and automation to ensure repeatable, auditable platform provisioning.</li>
<li>Build and operate observability, monitoring, and alerting solutions for AI platforms, ensuring availability, performance, and cost transparency.</li>
<li>Collaborate with Security and Risk partners to integrate identity, access controls, data protection, and governance into AI infrastructure.</li>
<li>Contribute to architectural decisions and technical standards for AI platforms across Aladdin.</li>
<li>Participate in on-call rotations and operational support as required for critical platforms.</li>
<li>Continuously evaluate emerging AI infrastructure technologies and apply them pragmatically within BlackRock’s enterprise context.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Strong experience in cloud infrastructure, platform engineering, or systems engineering roles.</li>
<li>4+ years of hands-on expertise with AWS and/or Azure and/or GCP, including Azure ML, Azure Foundry, AWS Bedrock, Google Vertex, as well as cloud compute, networking, storage, and security services.</li>
<li>Understanding of ML platform operations and governance concepts, including model deployment strategies, lifecycle management, monitoring/observability, and disaster recovery.</li>
<li>Experience supporting LLMs, generative AI platforms, or model serving infrastructure.</li>
<li>Experience supporting AI and machine learning workloads, with exposure to managed compute for model training and fine-tuning, experimentation over large datasets, and end-to-end MLOps pipeline flow including data ingestion, training, validation, and deployment.</li>
<li>Proficiency with Infrastructure as Code tools (e.g., Terraform, ARM/Bicep, CloudFormation).</li>
<li>Strong programming or scripting skills (e.g., Python, Bash, or similar).</li>
<li>Experience building and operating containerized and Kubernetes-based platforms.</li>
<li>Solid understanding of reliability, scalability, observability, and operational best practices.</li>
<li>Ability to work effectively in cross-functional teams and communicate complex technical concepts clearly.</li>
</ul>
<p><strong>Our Benefits</strong></p>
<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>
<p><strong>Our Hybrid Work Model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, Azure, GCP, Cloud compute, Networking, Storage, Security services, ML platform operations, Governance concepts, Model deployment strategies, Lifecycle management, Monitoring/observability, Disaster Recovery, LLMs, Generative AI platforms, Model serving infrastructure, AI and machine learning workloads, Managed compute, Fine-tuning, Experimentation, End-to-end MLOps pipeline flow, Data ingestion, Training, Validation, Deployment, Infrastructure as Code, Terraform, ARM/Bicep, CloudFormation, Programming, Scripting, Containerized and Kubernetes-based platforms, Reliability, Scalability, Observability, Operational best practices, GPU or accelerator-based infrastructure, Financial services or highly regulated industries, Multicloud architectures and enterprise governance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2JsY2bUdeEEzUfhn796RPb/associate%2Fvice-president%2C-ai-infrastructure-engineer-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2305618f-5e7</externalid>
      <Title>Backend Engineer: Retail Media</Title>
      <Description><![CDATA[<p><strong>About the Job</strong></p>
<p>Constructor is seeking a Backend Engineer to join our Retail Media team. As a Backend Engineer, you will design, deliver, and maintain web services in close collaboration with other engineers.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Build, deploy, and support services using Python and FastAPI</li>
<li>Write AWS CloudFormation scripts, Jenkins jobs, and GitHub Actions workflows following industry best practices</li>
<li>Set up service observability, monitoring metrics, and alerting (Prometheus, Grafana, PagerDuty, AWS CloudWatch)</li>
<li>Implement CI/CD pipelines and separate stability testing</li>
<li>Collaborate with technical and non-technical business partners to develop and update functionalities</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong computer science background and familiarity with networking principles</li>
<li>Experience in designing, developing, and maintaining high-load real-time services</li>
<li>Proficiency in Infrastructure as Code (IaC) tools like CloudFormation or Terraform for managing cloud resources</li>
<li>Hands-on experience with setting up and improving CI/CD pipelines</li>
<li>Proficiency in Python</li>
<li>Experience in server-side coding for web services and a good understanding of API design principles</li>
<li>Skilled in setting up and managing observability tools like Prometheus, Grafana, and integrating alert systems like PagerDuty</li>
<li>Familiarity with Service-Oriented Architecture and knowledge of communication protocols like protobuf</li>
<li>Experience with NoSQL and relational databases, distributed systems, and caching solutions (MySQL/PostgreSQL, ClickHouse/Athena)</li>
<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP</li>
<li>Experience collaborating in cross-functional teams</li>
<li>Excellent English communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time</li>
<li>Fully remote team</li>
<li>Work from home stipend</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity and paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k-$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options offered in addition to the base salary</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k-$120k USD</Salaryrange>
      <Skills>Python, FastAPI, AWS CloudFormation, Jenkins, GitHub, Prometheus, Grafana, PagerDuty, AWS CloudWatch, CI/CD pipelines, Infrastructure as Code, NoSQL databases, relational databases, distributed systems, caching solutions, protobuf, Service-Oriented Architecture, communication protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/5EBA554B5E</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2494c7ce-d01</externalid>
      <Title>MLOps: ML Recall</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>Constructor is a search and discovery platform for ecommerce, built to optimize revenue, conversion rate, and profit. Our search engine is built entirely in-house using transformers and generative LLMs.</p>
<p><strong>The Team</strong></p>
<p>The ML Recall team delivers measurable KPI improvements for our customers in search, driving better relevance and user satisfaction. We’re focused on building transparent, reproducible, and scalable data-intensive workflows.</p>
<p><strong>Challenges you’ll tackle</strong></p>
<ul>
<li>Build, deploy, and maintain our search services, including I/O-bound web services, CPU- and GPU-bound workloads, and data services</li>
<li>Develop using AWS CloudFormation, AWS CDK, Jenkins, and GitHub Actions</li>
<li>Optimize system performance, particularly for scaling large ML models efficiently</li>
<li>Maintain and enhance our observability stack, including tools like Prometheus, Grafana, PagerDuty, and AWS CloudWatch</li>
<li>Collaborate with both technical and non-technical stakeholders to design and evolve search functionality</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Excellent communicator with a passion for performance optimization</li>
<li>Excited to build scalable ML platforms and practical search systems</li>
<li>Strong proficiency in Python</li>
<li>Proven experience designing, developing, and maintaining high-load, distributed, real-time services</li>
<li>Demonstrated experience setting up and improving CI/CD pipelines</li>
<li>Hands-on experience with cloud platforms (AWS preferred) and Infrastructure as Code (e.g., Terraform, CloudFormation)</li>
<li>Proficiency with big data technologies across the end-to-end ML product lifecycle</li>
<li>Solid experience in server-side web service development and API design</li>
</ul>
<p><strong>What can help to stand out</strong></p>
<ul>
<li>Experience with Rust or another low-level programming language</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time</li>
<li>Fully remote team</li>
<li>Work from home stipend</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>Python, AWS CloudFormation, AWS CDK, Jenkins, GitHub Actions, Prometheus, Grafana, PagerDuty, AWS CloudWatch, Infrastructure as Code, Terraform, CloudFormation, Big data technologies, Server-side web service development, API design, Rust, Low-level programming language</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/2D42D22849</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>46a8c619-ec1</externalid>
      <Title>Backend Engineer: AI Shopping Agents</Title>
      <Description><![CDATA[<p><strong>About the Job</strong></p>
<p>Constructor is seeking a Backend Engineer to join its AI Shopping Agents team. The primary focus of this job is to design, deliver &amp; maintain web and data pipeline services in close collaboration with other engineers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, deploy, and support backend services</li>
<li>Define cloud infrastructure using AWS CloudFormation and maintain CI/CD pipelines with GitHub Actions</li>
<li>Improve and operate our observability stack</li>
<li>Collaborate with technical and non-technical stakeholders to design, develop, and refine features</li>
<li>Communicate effectively with stakeholders within and outside the team</li>
<li>Contribute to data processing pipelines and ETL processes</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong proficiency in Python and server-side web development (API design, concurrency/asynchronous programming)</li>
<li>Experience designing, building, and operating production backend services (performance, reliability, on-call/operations mindset)</li>
<li>Experience with Infrastructure as Code and cloud resource management (AWS preferred; Azure/GCP also fine)</li>
<li>Hands-on experience building or maintaining CI/CD pipelines</li>
<li>Experience with observability: metrics/logs/traces, dashboards, and alerting</li>
<li>Experience working with databases, including at least one relational and one NoSQL system (e.g., PostgreSQL, DynamoDB)</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Experience with high-load and/or real-time systems</li>
<li>Experience with distributed/service-oriented architectures, including interface definition and binary RPC (e.g., Protobuf/gRPC)</li>
<li>Familiarity with additional vector databases</li>
<li>Experience contributing to or owning ETL/data pipeline systems at scale</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Work with smart and empathetic people who will help you grow and make a meaningful impact.</li>
<li>Regular team offsite events to connect and collaborate.</li>
<li>Fully remote team - choose where you live.</li>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee, refreshed each year.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Base salary: $80k–$120K USD, depending on knowledge, skills, experience, and interview results.</li>
<li>Stock options - offered in addition to the base salary</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120K USD</Salaryrange>
      <Skills>Python, server-side web development, API design, concurrency/asynchronous programming, Infrastructure as Code, cloud resource management, CI/CD pipelines, observability, metrics/logs/traces, dashboards, alerting, databases, relational databases, NoSQL databases, high-load and/or real-time systems, distributed/service-oriented architectures, interface definition, binary RPC, vector databases, ETL/data pipeline systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a US-based company that has been in the market since 2019, building a next-generation platform for search and discovery in ecommerce. Its search engine is entirely invented in-house and powers over 1 billion queries every day across 150 languages and roughly 100 countries.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/9A5E2DE872</Applyto>
      <Location>Oregon, United States</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>282c4fb7-d6b</externalid>
      <Title>Senior Backend Engineer: Recommendations</Title>
      <Description><![CDATA[<p><strong>About the Job</strong></p>
<p>Constructor is seeking a Senior Backend Engineer to join our Recommendations team. As a key member of our engineering team, you will design, deliver, and maintain high-load real-time web services in close collaboration with other great engineers.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Build, deploy, and support robust recommendations services, including I/O-bound web services, CPU-bound services, and data services</li>
<li>Write AWS CloudFormation scripts, Jenkins jobs, and GitHub Actions workflows following industry best practices</li>
<li>Set up service observability, monitoring metrics, and alerting using Prometheus, Grafana, PagerDuty, and AWS CloudWatch</li>
<li>Implement CI/CD pipelines and separate stability testing for recommendations needs</li>
<li>Collaborate with technical and non-technical business partners to develop and update recommendations functionalities</li>
<li>Communicate with stakeholders within and outside the team</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong computer science background and familiarity with networking principles</li>
<li>Experience in designing, developing, and maintaining high-load real-time services</li>
<li>Proficiency in Infrastructure as Code (IaC) tools like CloudFormation or Terraform for managing cloud resources</li>
<li>Hands-on experience with setting up and improving CI/CD pipelines</li>
<li>Proficiency in a scripting language like Python and, as a plus, in compiled languages like Go or Rust</li>
<li>Experience in server-side coding for web services and a good understanding of API design principles</li>
<li>Skilled in setting up and managing observability tools like Prometheus, Grafana, and integrating alert systems like PagerDuty</li>
<li>Familiarity with Service-Oriented Architecture and knowledge of communication protocols like protobuf</li>
<li>Experience with NoSQL and relational databases, distributed systems, and caching solutions</li>
<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP</li>
<li>Experience collaborating in cross-functional teams</li>
<li>Excellent English communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time</li>
<li>Fully remote team</li>
<li>Work from home stipend</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity and paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options offered in addition to the base salary</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
<p><strong>Diversity, Equity, and Inclusion at Constructor</strong></p>
<p>At Constructor.io, we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status, or affiliation in any other protected group.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>computer science background, networking principles, Infrastructure as Code (IaC) tools, CloudFormation or Terraform, CI/CD pipelines, Python, Go or Rust, server-side coding for web services, API design principles, Prometheus, Grafana, PagerDuty, Service-Oriented Architecture, protobuf, NoSQL and relational databases, distributed systems, caching solutions, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
<Employerdescription>Constructor is a U.S.-based company that has been in the market since 2019, building a next-generation platform for search and discovery in ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/F0DCABC33E</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3f7b2fe6-074</externalid>
      <Title>DevOps Engineer - II</Title>
      <Description><![CDATA[<p>Job Title: DevOps Engineer - II</p>
<p>We are seeking an experienced DevOps Engineer to join our team. As a DevOps Engineer, you will play a pivotal role in ensuring the security, scalability, and reliability of our infrastructure and applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain secure CI/CD pipelines for automating deployment, configuration, and testing processes.</li>
<li>Own Helpshift production services and ensure complete monitoring coverage, troubleshoot and fix production issues.</li>
<li>Build a seamless zero-downtime process to upgrade our core infrastructure (ScyllaDB, Elasticsearch, Kafka, MongoDB, Redis).</li>
<li>Collaborate with development and operations teams to integrate security practices into the software development lifecycle.</li>
<li>Conduct regular security assessments, vulnerability scans, and penetration testing to identify and mitigate security risks.</li>
<li>Develop and maintain infrastructure as code (IaC) templates for provisioning and configuring cloud resources securely.</li>
<li>Monitor and respond to production incidents, including investigation, containment, and remediation activities.</li>
<li>Stay up-to-date with the latest security threats, vulnerabilities, and best practices, and make recommendations for continuous improvement.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of relevant experience.</li>
<li>In-depth knowledge of running/managing UNIX-like operating systems (we use Ubuntu).</li>
<li>Strong knowledge of networking protocols, security architectures, and identity and access management (IAM) principles.</li>
<li>Experience with containerisation technologies (e.g., Docker, Kubernetes) and securing containerised environments.</li>
<li>Experience designing and building solutions that are highly scalable, fault-tolerant, and cost-effective.</li>
<li>Experience with various FOSS tools for monitoring, graphing, capacity planning, and logging.</li>
<li>Experience with IaC tools like Ansible, Puppet, and Terraform.</li>
<li>Experience with cloud computing platforms like AWS, Google Cloud Platform, and Heroku.</li>
<li>Experience managing NoSQL databases and RDBMS.</li>
<li>Experience with queuing systems (Kafka, RabbitMQ) and big data platforms (Hadoop).</li>
<li>Good programming skills with a focus on scripting (Python, Shell, Perl).</li>
<li>Ability to analyse architectural bottlenecks and quickly debug issues through to resolution.</li>
<li>Have an automation mindset and ability to reason and work with complex systems.</li>
<li>Excellent communication and documentation skills.</li>
<li>Quick learner and a good mentor for junior team members.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>UNIX-like operating systems, Networking protocols, Security architectures, Identity and access management, Containerisation technologies, Infrastructure as code, Cloud Computing platforms, NoSQL and RDBMS, Queuing systems, Big data platforms, Scripting languages</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a software company that provides customer service and support solutions. It has a global presence with a large customer base.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FC0D5C3653</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>eebf21c4-d1f</externalid>
      <Title>Staff Site Reliability Engineer</Title>
      <Description><![CDATA[<p>Join our Site Reliability Engineering (SRE) team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide.</p>
<p>As a Staff Site Reliability Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking Staff SREs who are passionate about building and maintaining resilient systems at scale. Your mission will be to proactively find and analyze reliability problems across our stack, then design and implement software and systems to create step-function improvements.</p>
<p>You will design robust observability solutions, lead incident response, automate operational tasks, and continuously improve our infrastructure&#39;s reliability, all while mentoring and educating the broader engineering team to make reliability a core value at Replit.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and Implement Observability: Design, build, and lead the implementation of comprehensive monitoring, logging, and tracing solutions. Create dashboards and metrics that provide real-time visibility into system health and performance, enabling proactive issue detection.</li>
<li>Define and Drive Reliability Standards: Work with product and engineering teams to define, implement, and track Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Build systems to monitor and report on these metrics, holding teams accountable and ensuring we maintain high reliability standards while balancing innovation speed.</li>
<li>Lead Incident Management and Response: Act as a senior leader during high-impact incidents, guiding the team to rapid resolution. Conduct thorough, blameless post-mortems and drive the implementation of preventative measures. Develop and refine runbooks and build automation to reduce Mean Time To Recovery (MTTR).</li>
<li>Drive Automation and Infrastructure as Code: Architect, build, and improve automation to eliminate toil and operational work. Design and maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimize Performance on Kubernetes: Collaborate with core infrastructure and product teams to performance-tune and optimize our large-scale cloud deployments, with a deep focus on Kubernetes, Docker, and GCP. Identify and resolve performance bottlenecks, implement capacity planning strategies, and reduce latency across global regions.</li>
<li>Debug and Harden Distributed Systems: Dive deep into debugging extremely difficult technical problems across the stack. Use your findings to design and implement long-term fixes that make our systems and products more robust, operable, and easier to diagnose.</li>
<li>Provide Staff-Level Guidance: Review feature and system designs from across the company, acting as a key owner for the reliability, scalability, security, and operational integrity of those designs.</li>
<li>Educate and Mentor: Educate, mentor, and hold accountable the broader engineering team to improve the reliability of our systems, making reliability a core value of the Replit engineering culture.</li>
<li>Build and Integrate: Write high-quality, well-tested code in Python or Go to meet the needs of your customers, whether it&#39;s building new internal tools or integrating with third-party vendors.</li>
</ul>
<p><strong>Required Skills and Experience</strong></p>
<ul>
<li>8-10 years of experience in Site Reliability Engineering or similar roles (e.g., DevOps, Systems Engineering, Infrastructure Engineering).</li>
<li>Strong programming skills in languages like Python or Go. You write high-quality, well-tested code.</li>
<li>Deep understanding of distributed systems. You’ve designed, built, scaled, and maintained production services and know how to compose a service-oriented architecture.</li>
<li>Deep experience with container orchestration platforms, specifically Kubernetes, and cloud-native technologies.</li>
<li>Proven track record of designing, implementing, and maintaining sophisticated monitoring and observability solutions (e.g., metrics, logging, tracing).</li>
<li>Strong incident management skills with extensive experience leading incident response for complex systems and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform, Pulumi) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain complex technical concepts clearly and simply and a bias toward open, transparent cultural practices.</li>
<li>Strong interpersonal skills, with experience working with and mentoring engineers from junior to principal levels.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Deep experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Expert-level knowledge of modern observability platforms (e.g., Prometheus, Grafana, Datadog, OpenTelemetry).</li>
<li>Experience designing and building reliable systems capable of handling high throughput and low latency.</li>
<li>Significant experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth, startup environments.</li>
<li>Experience writing company-facing blog posts and training materials.</li>
</ul>
]]></Description>
<Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$220K - $325K</Salaryrange>
<Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Distributed Systems, Container Orchestration, Kubernetes, Cloud-Native Technologies, Monitoring and Observability, Incident Management, Infrastructure as Code, Terraform, Pulumi, Configuration Management, Google Cloud Platform, Prometheus, Grafana, Datadog, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language, with millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/d50ad15b-82d4-452f-b4ea-2a7f5e796170</Applyto>
      <Location>Remote (United States)</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>723d3153-72d</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>At Anthropic, we are pioneering new frontiers in AI that have the potential to greatly benefit society. However, developing advanced AI also comes with risks if not properly safeguarded. That&#39;s why we are seeking an exceptional Detection and Response engineer that will be on the frontlines to build solutions to monitor for threats, rapidly investigate incidents, and coordinate response efforts with other teams. In this role, you will have the opportunity to shape our security capabilities from the ground up alongside our world-class research and security teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead cybersecurity Incident Response efforts covering diverse domains from external attacks to insider threats involving all layers of Anthropic’s technology stack</li>
<li>Develop and deploy novel tooling that may leverage Large Language Models to enhance detection, investigation, and response capabilities</li>
<li>Create and optimise detections, playbooks, and workflows to quickly identify and respond to potential incidents</li>
<li>Review Incident Response metrics and procedures and drive continuous improvement</li>
<li>Work cross functionally with other security and engineering teams</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>3+ years of software engineering experience (security experience is a plus), and/or</li>
<li>5+ years of detection engineering, incident response, or threat hunting experience</li>
<li>A solid understanding of cloud environments and operations</li>
<li>Experience working with engineering teams in a SaaS environment</li>
<li>Exceptional communication and collaboration skills</li>
<li>An ability to lead projects with little guidance</li>
<li>The ability to pick up new languages and technologies quickly</li>
<li>Experience handling security incidents and investigating anomalies as part of a team</li>
<li>Knowledge of EDR, SIEM, SOAR, or related security tools</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>Experience performing security operations or investigations involving large-scale Kubernetes environments</li>
<li>A high level of proficiency in Python and query languages such as SQL</li>
<li>Experience analysing attack behaviour and prototyping high-quality detections</li>
<li>Experience with threat intelligence, malware analysis, infrastructure as code, detection engineering, or forensics</li>
<li>Experience contributing to a high growth startup environment</li>
</ul>
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
<Skills>software engineering, security experience, detection engineering, incident response, threat hunting, cloud environments, operations, engineering teams, SaaS environment, communication skills, project leadership, new languages and technologies, security incidents, anomalies, EDR, SIEM, SOAR, security tools, Python, SQL, threat intelligence, malware analysis, infrastructure as code, forensics, Kubernetes environments, high growth startup environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4982193008</Applyto>
<Location>San Francisco, CA | New York City, NY | Seattle, WA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>139cd1f4-231</externalid>
      <Title>Software Engineer, Compute Efficiency</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>At Anthropic, we are building some of the most complex and large-scale AI infrastructure in the world. As that infrastructure scales rapidly, so does the imperative to optimise how we use it. As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable—without compromising reliability or latency.</p>
<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimisation frameworks that ensure every dollar of our infrastructure investment delivers maximum value. This is a high-impact, cross-functional role at the intersection of systems engineering, financial optimisation, and AI infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilisation, and costs across our cloud and datacentre fleets.</li>
<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimise their resource consumption.</li>
<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>
<li>Partner closely with cloud service providers and internal stakeholders to optimise cluster configurations, workload placement, and resource utilisation across AI training and inference workloads—including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>
<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>
<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>
<li>Drive architectural improvements and code-level optimisations across multiple services and platforms to deliver measurable utilisation and performance gains.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead.</li>
<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java).</li>
<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>
<li>Have experience optimising end-to-end performance of distributed systems, including workload right-sizing and resource utilisation tuning.</li>
<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues.</li>
<li>Have experience designing or working with performance and utilisation monitoring tools in large-scale, distributed environments.</li>
<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>
<li>Have excellent communication and collaboration skills: you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>
<li>Low-level systems experience, for example Linux kernel tuning and eBPF.</li>
<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems.</li>
<li>Published work in performance optimisation and scaling distributed systems.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, performance optimisation, scalability, continuous improvement, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, systems design tradeoffs, published work in performance optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation building some of the most complex and large-scale AI infrastructure in the world.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108982008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>25934fbc-c50</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform—from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.</p>
<p>Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic&#39;s most precious resources—compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale.</p>
<p>Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration</li>
<li>Have a strong interest in inference</li>
<li>Thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>
<li>Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>Pick up slack, even when it goes outside your job description</li>
</ul>
<p><strong>Strong Candidates May Also Have Experience With</strong></p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimization, batching, caching, and serving strategies</li>
<li>Experience with machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $485,000 USD</Salaryrange>
      <Skills>Software engineering, Cloud infrastructure, Kubernetes, Infrastructure as Code, Container orchestration, LLM inference optimization, Batching, Caching, Serving strategies, Machine learning infrastructure, GPUs, TPUs, Trainium, AI accelerators, CI/CD systems, Deployment and validation, Cloud environments, Multi-region deployments, Geographic routing, Global traffic management, Python, Rust, Cloud platforms, Networking, Security, Privacy, Billing, Managed service offerings, Platform-agnostic tooling, Abstraction layers, Capacity management, Cost optimization, Resource planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>9d1edc39-95f</externalid>
      <Title>Senior Engineer, Datacenter Server Lifecycle</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is expanding beyond cloud infrastructure, and this role sits at the heart of that effort. As a Senior Engineer on the Datacenter Machine Lifecycle team, you will own the end-to-end operational journey of every machine in our facility — from initial provisioning and deployment, across its working life, through maintenance and refresh, and all the way to decommissioning. This is greenfield work: you will help define the processes, tooling, and operational standards that govern how we run and retire hardware at scale.</p>
<p>A distinguishing aspect of this role is its deep intersection with security. The machines in our datacenter handle some of the most sensitive workloads in AI — training frontier models and serving millions of users interacting with Claude. Ensuring that every machine in the fleet is trusted, attested, and operating with a verified chain of integrity from the hardware up is a core part of the job, not an afterthought. You will partner closely with our Infrastructure Security team to define and enforce trusted compute standards across the lifecycle, from secure provisioning through end-of-life handling.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead the build-out of automation to support datacenters containing tens of thousands of servers.</li>
<li>Own and define the end-to-end machine lifecycle strategy — from provisioning and deployment through operation, maintenance, refresh, and decommissioning — and maintain automation and operational procedures for common lifecycle events (e.g. hardware failures, firmware upgrades, fleet rotations).</li>
<li>Partner closely with Infrastructure Security to design and enforce trusted compute standards across the machine lifecycle.</li>
<li>Work closely with our Networking team to ensure end-to-end connectivity across all sites.</li>
<li>Build and maintain tooling to track machine health, configuration, and operational status across the full datacenter fleet.</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 5+ years of experience in datacenter operations, hardware infrastructure management, or a closely related discipline.</li>
<li>Have deep, hands-on experience with server hardware — including rack deployment, cabling, troubleshooting, and understanding failure modes at scale.</li>
<li>Understand hardware lifecycle management end-to-end: asset tracking, provisioning workflows, maintenance scheduling, and decommissioning practices.</li>
<li>Have strong proficiency in at least one programming language (e.g., Python, Rust, Go, or Java).</li>
<li>Are comfortable navigating ambiguity and working independently to drive progress on complex, cross-functional problems.</li>
<li>Communicate clearly and can build consensus with a wide range of stakeholders.</li>
<li>Have working knowledge of modern cloud infrastructure, including Kubernetes, Infrastructure as Code, AWS, and GCP.</li>
<li>Are comfortable with occasional travel to datacenter sites across North America.</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Hands-on experience with GPU or AI accelerator hardware (e.g. NVIDIA A100/H100, AMD MI300, Google TPUs, or AWS Trainium) and an understanding of their operational demands.</li>
<li>Familiarity with modern provisioning tooling such as coreboot, LinuxBoot, or u-root.</li>
<li>Experience building or contributing to datacenter automation or fleet management platforms.</li>
<li>Experience building and deploying server operating system distributions across thousands of hosts.</li>
<li>A background in large-scale capacity planning and hardware refresh strategy, ideally at a hyperscaler or large cloud provider.</li>
<li>Experience with trusted compute and hardware security concepts such as secure boot, TPM, hardware attestation, and firmware verification — or a strong desire to develop deep expertise in this area.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£255,000 - £325,000 GBP</Salaryrange>
      <Skills>datacenter operations, hardware infrastructure management, server hardware, programming language, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, GPU or AI accelerator hardware, modern provisioning tooling, datacenter automation, fleet management platforms, server operating system distributions, trusted compute and hardware security concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It is working on building beneficial AI systems for users and society as a whole.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5131038008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d4cc9e89-c94</externalid>
      <Title>Infrastructure Engineer, Sandboxing</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the role</strong></p>
<p>Anthropic is seeking an experienced Infrastructure Engineer to join our Sandboxing team within the Research organisation. In this role, you&#39;ll build and scale the systems that enable researchers to safely execute and experiment with AI-generated code and interactions in isolated environments.</p>
<p>As our models become more capable, the infrastructure supporting secure execution environments becomes increasingly critical. You&#39;ll work on distributed systems that must operate reliably at significant scale while maintaining strong security boundaries. Your work will directly support Anthropic&#39;s mission to develop AI systems that are safe, beneficial, and trustworthy.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and operate distributed backend systems that power secure sandboxed execution environments</li>
<li>Scale infrastructure to meet growing research and product demands while maintaining reliability and performance</li>
<li>Implement and maintain serverless architectures and container orchestration systems</li>
<li>Collaborate with research teams to understand requirements and translate them into robust infrastructure solutions</li>
<li>Develop monitoring, alerting, and observability systems to ensure operational excellence</li>
<li>Participate in on-call rotations and incident response to maintain system reliability</li>
<li>Contribute to infrastructure automation and tooling that improves developer productivity</li>
<li>Partner with security teams to ensure sandboxing infrastructure maintains appropriate isolation guarantees</li>
</ul>
<p><strong>You may be a good fit if you</strong></p>
<ul>
<li>Have 5+ years of experience building and operating backend infrastructure at scale</li>
<li>Have deep expertise in distributed systems design and implementation</li>
<li>Have strong operational experience, including debugging complex production issues</li>
<li>Are proficient with cloud platforms, particularly GCP/GCS (experience with AWS or Azure is also valuable)</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and understand their security implications</li>
<li>Are comfortable working with infrastructure as code and modern DevOps practices</li>
<li>Have strong programming skills in languages such as Python, Go, or Rust</li>
<li>Are results-oriented with a bias towards flexibility and impact</li>
<li>Care about the societal impacts of your work and are motivated by Anthropic&#39;s mission</li>
</ul>
<p><strong>Strong candidates may also have experience with</strong></p>
<ul>
<li>Serverless architectures and functions-as-a-service platforms (Cloud Functions, Cloud Run, Lambda)</li>
<li>Designing and implementing secure multi-tenant systems</li>
<li>High-performance computing environments or ML infrastructure</li>
<li>Linux systems internals, including namespaces, cgroups, and seccomp</li>
<li>Network security and isolation techniques</li>
<li>Building systems that support research workflows and rapid iteration</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship</strong></p>
<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>
<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong></p>
<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that everyone is aligned and working towards the same goals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>Distributed systems design and implementation, Cloud platforms (GCP/GCS, AWS, Azure), Containerization technologies (Docker, Kubernetes), Infrastructure as code and modern DevOps practices, Programming skills in languages such as Python, Go, or Rust, Serverless architectures and functions-as-a-service platforms, Secure multi-tenant systems, High-performance computing environments or ML infrastructure, Linux systems internals, Network security and isolation techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030680008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>b8b7f851-497</externalid>
      <Title>Sr. Systems Engineer - IAM</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>We are looking for a Senior/Staff Systems Engineer - IAM to secure identities, including end-user accounts, service accounts, application identities, APIs, AI agents, and automated workloads across Replit’s IT environment.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a technical expert to assess our current state of IAM and design a modern and scalable access strategy across our cloud-first infrastructure. The ideal candidate combines deep technical expertise, operational rigor, and a customer-first mindset.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Serve as the technical owner of Replit’s corporate IT identity architecture</li>
<li>Design and implement scalable authentication and authorization solutions (SSO, phishing-resistant MFA, passwordless, tokens, device trust, zero trust)</li>
<li>Architect lifecycle management workflows to support a rapidly growing corporate IT environment</li>
<li>Evaluate technologies to protect against current and emerging threats</li>
<li>Partner with internal teams to implement and maintain provisioning/deprovisioning workflows via SCIM, APIs, and custom automations</li>
<li>Support SOC 2, ISO 27001 and SOX controls related to identity governance</li>
<li>Serve as the enterprise-wide subject matter expert and escalation point for complex authentication and authorization inquiries and issues</li>
<li>Mentor IT and security engineers on identity best practices</li>
<li>Additional duties as assigned</li>
</ul>
<p><strong>Required Skills &amp; Experience</strong></p>
<ul>
<li>8+ years of experience with identity and access management tools and platforms, with at least 5 years of hands-on Okta experience</li>
<li>Expert in authentication and federation technologies (SSO, SAML, OAuth/OIDC, SCIM)</li>
<li>Deep knowledge of identity lifecycle management and access governance within HRIS and SaaS platforms</li>
<li>Proficient in one or more workflow automation platforms such as Workato, Zapier, Okta Workflows, or equivalent</li>
<li>Experience deploying Infrastructure as Code with tools such as Terraform, Google Cloud Deployment Manager, or AWS CloudFormation</li>
<li>Strong communication skills with the ability to convey IAM concepts to a non-technical audience</li>
<li>Demonstrated experience serving as a technical advisor for cross-functional teams to ensure IAM integrates into a wider security strategy</li>
</ul>
<p><strong>Bonus Qualifications</strong></p>
<ul>
<li>Active Replit user and passionate about making software creation more accessible</li>
<li>Strong understanding of networking concepts</li>
<li>Have been part of a rapid-growth SaaS startup</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>$95K - $200K</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$95K - $200K</Salaryrange>
      <Skills>identity and access management, Okta, authentication and federation technologies, SCIM, workflow automation platforms, Infrastructure as Code, networking and networking concepts, rapid growth SaaS startup</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a cloud-first infrastructure provider.</Employerdescription>
      <Employerwebsite>https://replit.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/6fc855ec-0cbe-45a2-9907-71a15c5d188b</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>8c164f95-f8d</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Senior Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking Senior Infrastructure Engineers who are passionate about building and maintaining resilient systems at scale. Your mission will be to proactively find and analyse reliability problems across our stack, then design and implement software and systems to address them. You will build robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability.</p>
<p><strong>You Will:</strong></p>
<ul>
<li>Drive Automation and Infrastructure as Code: Build and improve automation to eliminate toil and operational work. Maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). Identify and resolve performance bottlenecks and implement capacity planning strategies.</li>
<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>
<li>Drive Cross-Team Improvements: Partner with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>
<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the engineering lifecycle, from local development to production monitoring.</li>
<li>Debug and Harden Systems: Dive deep into debugging difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>
<li>Collaborate on Design Reviews: Participate in feature and system design reviews, contributing expertise on security, scale, and operational considerations.</li>
<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>
</ul>
<p><strong>Required Skills and Experience:</strong></p>
<ul>
<li>4+ years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering).</li>
<li>Strong programming skills in languages like Python or Go.</li>
<li>You write high-quality, well-tested code.</li>
<li>Solid understanding of distributed systems. You&#39;ve built, scaled, and maintained production services and understand service-oriented architecture.</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>
<li>Experience implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>
<li>Strong incident management skills with experience participating in incident response and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>
<li>Experience building reliable systems capable of handling high throughput and low latency.</li>
<li>Experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth environments.</li>
</ul>
<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K - $240K</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Terraform, Kubernetes, Docker, GCP, Monitoring/observability solutions, Debugging and performance tuning, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform (GCP) services and tools, Modern observability platforms (Prometheus, Grafana, Datadog, etc.), Building reliable systems capable of handling high throughput and low latency, Go and Terraform, Familiarity with working in rapid-growth environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading platform in the software development industry.</Employerdescription>
      <Employerwebsite>https://replit.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/16c85abc-763c-4f36-ab67-64f416343384</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>b7de618e-5e1</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>Join our Site Reliability Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Site Reliability Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking SREs who are passionate about building and maintaining resilient systems at scale. Your mission will be to design and implement robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability and performance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and Implement Observability Solutions: Develop comprehensive monitoring and alerting systems using modern observability tools. Create dashboards and metrics that provide real-time visibility into system health and performance. Implement logging strategies that enable quick problem identification and resolution.</li>
<li>Drive Automation and Infrastructure as Code: Architect and implement infrastructure automation solutions using tools like Terraform, Ansible, or Pulumi. Design and maintain CI/CD pipelines that enable reliable and consistent deployments. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Establish SLOs and SLIs: Work with product and engineering teams to define and implement Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Build systems to track and report on these metrics, ensuring we maintain high reliability standards while balancing innovation speed.</li>
<li>Incident Management and Response: Lead incident response efforts, conduct thorough post-mortems, and implement improvements to prevent future occurrences. Develop and maintain runbooks for critical services. Build tools and processes that reduce Mean Time To Recovery (MTTR).</li>
<li>Performance Optimization: Identify and resolve performance bottlenecks across our infrastructure. Implement capacity planning strategies and optimize resource utilization. Work on reducing latency and improving system efficiency across global regions.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4-8 years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering)</li>
<li>Strong programming skills in languages commonly used for automation (Python, Go, or similar)</li>
<li>Deep understanding of distributed systems</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies</li>
<li>Proven track record of implementing and maintaining monitoring/observability solutions</li>
<li>Strong incident management skills with experience leading incident response</li>
<li>Experience with infrastructure as code and configuration management tools</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with Google Cloud Platform (GCP) services and tools</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.)</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Problem-solving mindset: Ability to approach complex operational challenges systematically and devise effective solutions</li>
<li>Self-directed and autonomous: Capable of working independently while collaborating effectively with cross-functional teams</li>
<li>Strong communication skills: Ability to explain complex technical concepts to both technical and non-technical audiences</li>
<li>Continuous learning: Passion for staying current with industry best practices and new technologies</li>
<li>Focus on automation: Strong belief in automating repetitive tasks and building self-healing systems</li>
</ul>
<p><strong>Full-Time Employee Benefits Include</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Want to Learn More About What We Are Up To?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160K - $250K</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability solutions, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading provider of software development tools.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/f6e6158e-eb89-4008-81ea-1b7512bc509d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>b854cfc3-84f</externalid>
      <Title>Engineering Manager, Enterprise Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring an Engineering Manager to lead the team that builds the infrastructure foundations for Replit Enterprise, enabling the deployment flexibility, networking capabilities, and data controls that large organisations require. You&#39;ll own the roadmap for enterprise infrastructure features including single-tenant deployments, private networking, data residency, and customer-managed encryption, partnering closely with Platform &amp; Infrastructure Engineering, Security and Sales to unlock adoption at the world&#39;s most demanding organisations.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li><strong>Own enterprise infrastructure foundations</strong>: Lead the roadmap for single-tenant and dedicated deployment options, VPC peering and private connectivity, static IP configurations, and regional data residency, ensuring enterprises can run Replit within their security and compliance boundaries.</li>
<li><strong>Build advanced data protection capabilities</strong>: Design and ship features like bring-your-own-key (BYOK) encryption, customer-managed keys, and enhanced data isolation controls that give enterprises ownership over their most sensitive data.</li>
<li><strong>Partner with go-to-market</strong>: Work hand-in-hand with Sales and Customer Success to understand enterprise infrastructure requirements, unblock deployments, and translate field feedback into the roadmap.</li>
<li><strong>Scale the team</strong>: Hire, coach, and retain exceptional infrastructure engineers; create a high-ownership culture with clear execution rituals, high code quality, and thoughtful on-call practices.</li>
<li><strong>Drive technical strategy</strong>: Define SLAs/SLOs for enterprise infrastructure reliability; evaluate build-vs-buy decisions for complex infrastructure capabilities; ensure our platform meets evolving enterprise security and compliance needs.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>6–10+ years in engineering with 3+ years managing teams building cloud infrastructure, platform engineering, or enterprise SaaS at scale.</li>
<li>Depth in cloud infrastructure and networking: you&#39;ve shipped production systems involving VPCs, private connectivity, multi-tenant isolation, or dedicated/single-tenant architectures.</li>
<li>Strong technical leadership across infrastructure technologies: Kubernetes, cloud platforms (GCP/AWS), networking, and Infrastructure as Code.</li>
<li>Experience with enterprise security and compliance requirements: encryption at rest/in transit, key management, data residency, or similar controls.</li>
<li>Cross-functional chops: you&#39;ve partnered with Security, Sales, and Support to scope and deliver infrastructure capabilities that close enterprise deals.</li>
<li>Excellent hiring, coaching, and performance management skills; crisp written/async communication.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building enterprise deployment options (dedicated tenants, private cloud, on-prem) for a SaaS platform.</li>
<li>Familiarity with enterprise compliance frameworks (SOC 2, FedRAMP, HIPAA) and how they translate to infrastructure requirements.</li>
<li>Experience with developer platforms, cloud IDEs, or developer productivity products.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$250K - $350K</Salaryrange>
      <Skills>cloud infrastructure, networking, Kubernetes, cloud platforms (GCP/AWS), Infrastructure as Code, enterprise security and compliance requirements, encryption at rest/in transit, key management, data residency, experience building enterprise deployment options, familiarity with enterprise compliance frameworks, experience with developer platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/7cba072f-9490-4c80-8782-3b8d0398b1a8</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>35fc7f23-917</externalid>
      <Title>Software Engineer, Growth Infrastructure</Title>
      <Description><![CDATA[<p>We are looking for an experienced Growth Infrastructure Engineer to build and maintain the technical backbone that enables scalable growth experiments, high-performance data pipelines, and automated systems that drive user acquisition, engagement, and product iteration.</p>
<p>This role sits at the intersection of growth, product, and infrastructure — combining deep technical engineering with experimentation and data-driven optimization. You will collaborate with product, data science, and backend teams to ensure that growth initiatives run smoothly and scale efficiently across systems.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Growth Infrastructure &amp; Systems</strong></p>
<ul>
<li>Design, implement, and maintain scalable infrastructure that supports growth and experimentation needs.</li>
<li>Build and optimize analytics pipelines to capture key product and growth metrics (acquisition, activation, retention, etc.).</li>
<li>Develop automated workflows for user onboarding, campaign delivery, and performance tracking.</li>
</ul>
<p><strong>Experimentation &amp; Optimization</strong></p>
<ul>
<li>Support A/B testing frameworks and integrate them into production systems.</li>
<li>Enable reliable data collection and evaluation for growth experiments.</li>
<li>Automate deployment and rollout of growth feature flags and tests.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with Growth Product Managers, Data Engineers, and Analysts to define technical requirements for growth initiatives.</li>
<li>Translate business goals into technical specifications and system designs.</li>
<li>Provide guidance on performance, reliability, and scalability trade-offs.</li>
</ul>
<p><strong>Monitoring &amp; Reliability</strong></p>
<ul>
<li>Implement monitoring and alerting for growth infrastructure services.</li>
<li>Troubleshoot production issues and optimize for uptime and performance.</li>
<li>Ensure data quality and consistency for reporting and decision-making.</li>
</ul>
<p><strong>Continuous Improvement</strong></p>
<ul>
<li>Evaluate new tools, frameworks, and platforms that accelerate growth engineering.</li>
<li>Drive best practices in infrastructure as code, CI/CD, and automated testing.</li>
<li>Train and mentor teammates on growth infrastructure principles.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Software Engineering, or related technical field.</li>
<li>3+ years of experience building backend or infrastructure-focused services.</li>
<li>Strong programming skills in languages such as Python, Go, or JavaScript.</li>
<li>Experience with cloud platform infrastructure (e.g., AWS, GCP, Azure).</li>
<li>Solid understanding of data pipelines, ETL processes, and databases.</li>
<li>Experience with CI/CD systems and Infrastructure as Code (Terraform, CloudFormation, etc.).</li>
<li>Experience with experimentation platforms (StatSig, Segment, LaunchDarkly, or in-house).</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Familiarity with growth metrics and product analytics tools (Amplitude, etc.).</li>
<li>Experience designing and scaling A/B testing systems and feature flag orchestration.</li>
<li>Proficient with containerization (Docker, Kubernetes) and distributed systems.</li>
<li>Knowledge of observability tools (Datadog, etc.).</li>
</ul>
<p><strong>What Success Looks Like</strong></p>
<ul>
<li>Growth initiatives that deploy reliably and quickly with minimal manual intervention.</li>
<li>High-fidelity data pipelines that enable real-time insight into key growth metrics.</li>
<li>Growth teams are empowered to run experiments and launch campaigns without heavy infrastructure support.</li>
</ul>
<p><strong>Why Join Us?</strong></p>
<ul>
<li>This is a rare opportunity to be among the first engineers on a newly formed Growth team, with significant ownership and influence over both technical direction and product outcomes.</li>
<li>You’ll be working on a product that scaled from ~2M to 250M users in under a year, operating in a massive and still largely untapped market.</li>
<li>The impact of your work will be visible, measurable, and foundational to how the company grows next.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $290K</Salaryrange>
      <Skills>Python, Go, JavaScript, AWS, GCP, Azure, CI/CD, Infrastructure as Code, Terraform, CloudFormation, Experimentation platforms, Growth metrics, Product analytics tools, Containerization, Distributed systems, Observability tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/37f81c18-c742-4f7d-bf81-34c3f5142973</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>323bc85d-b69</externalid>
      <Title>Staff Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role:</strong></p>
<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Staff Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Drive Automation and Infrastructure as Code: Architect, build, and improve automation to eliminate toil and operational work. Design and maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). Identify and resolve performance bottlenecks, implement capacity planning strategies, and reduce latency across global regions.</li>
<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>
<li>Drive Cross-Company Improvements: Partner directly with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>
<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the entire engineering lifecycle, from local development to production monitoring.</li>
<li>Debug and Harden Systems: Dive deep into debugging extremely difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>
<li>Provide Staff-Level Guidance: Review feature and system designs, acting as an owner for the security, scale, and operational integrity of those designs.</li>
<li>Educate and Mentor: Educate, mentor, and hold the engineering team accountable for improving the reliability of our systems, making reliability a core value of the Replit engineering culture.</li>
<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with third-party vendors.</li>
</ul>
<p><strong>Required Skills and Experience:</strong></p>
<ul>
<li>8-10 years of experience in Infrastructure Engineering or similar roles (DevOps, Systems Engineering, Site Reliability Engineering).</li>
<li>Strong programming skills in languages like Python or Go.</li>
<li>You write high-quality, well-tested code.</li>
<li>Deep understanding of distributed systems. You&#39;ve designed, built, scaled, and maintained production services and know how to compose a service-oriented architecture.</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>
<li>Proven track record of implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>
<li>Strong incident management skills with experience leading incident response and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly and simply and a bias toward open, transparent cultural practices.</li>
<li>Strong interpersonal skills, with experience working with engineers from junior to principal levels.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Deep experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>
<li>Experience designing and building reliable systems capable of handling high throughput and low latency.</li>
<li>Experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth environments.</li>
<li>Experience writing company-facing blog posts and training materials.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$220K - $325K</Salaryrange>
      <Skills>Infrastructure Engineering, DevOps, Systems Engineering, Site Reliability Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability solutions, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog, Terraform, Rapid-growth environments, Company-facing blog posts, Training materials</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/6481ec1e-527c-4c1f-a041-2fb5021e7bd5</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
  </jobs>
</source>