<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>af9a2709-b72</externalid>
      <Title>SDV (Senior) Manager Connectivity Architect</Title>
      <Description><![CDATA[<p>As an SDV (Senior) Manager Connectivity Architect, you will be responsible for defining, evaluating, and developing target and reference architectures for SDV connectivity, edge, and cloud platforms in the automotive and IoT context.</p>
<p>Your tasks will include:</p>
<ul>
<li>Defining, evaluating, and further developing target and reference architectures for SDV connectivity, edge, and cloud platforms in the automotive and IoT context</li>
<li>Architectural review and integration of IoT and connectivity platforms (e.g., Azure IoT, AWS IoT Core, Bosch IoT Suite, Siemens MindSphere, or comparable platforms)</li>
<li>Defining edge architectures, containerization, orchestration, and local data processing (e.g., Docker, Kubernetes, edge gateways, local analytics)</li>
<li>Designing security-by-design architectures across device, network, edge, and cloud (identities, PKI, certificate lifecycle, secure boot, TPM, zero-trust approaches)</li>
<li>Technical leadership and architectural responsibility for mobile, IP, IoT, and service-based communication architectures from access network to application level (including 4G/5G, LTE-M, NB-IoT, NTN, IMS, TCP/IP, MQTT, AMQP, OPC UA, CoAP, HTTP/REST)</li>
<li>Architectural responsibility for device and platform lifecycle topics (provisioning, OTA updates, configuration, and version management)</li>
<li>Architectural design of edge-to-cloud communication, including data flows, service interfaces, and scalability and resilience concepts</li>
<li>Technical leadership and sparring-partner support for project teams, customers, and partners, as well as support for proposal, strategy, and scaling topics</li>
</ul>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>A bachelor&#39;s degree in computer science, technical computer science, communications engineering, electrical engineering, or a related field</li>
<li>At least 5 years of experience in the architecture of complex, scalable connectivity, IoT, edge, or cloud platforms</li>
<li>A passion for designing scalable connectivity, IoT, edge, and cloud architectures and for approaching complex technical interrelationships in a holistic, sustainable, and forward-looking manner</li>
<li>Expertise in defining and taking responsibility for complex system and reference architectures for distributed connectivity, IoT, and cloud platforms</li>
<li>Deep knowledge of architecture, integration, and security, including modern protocols and security-by-design principles</li>
<li>A structured, analytical, and decisive working style, with the ability to present complex technical content clearly and convincingly at the architectural, decision-making, and management levels</li>
</ul>
<p>MHP offers a dynamic and supportive work environment where you can grow professionally and personally. We provide a range of benefits, including:</p>
<ul>
<li>Recognition and appreciation for our employees</li>
<li>Encouragement of creativity and new ideas</li>
<li>Flexibility in terms of time and location</li>
<li>Opportunities for professional growth and development</li>
</ul>
<p>If you are interested in this opportunity, please submit your application through our job locator. We look forward to hearing from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary</Salaryrange>
      <Skills>Architecture, Cloud computing, IoT, Edge computing, Security, Networking, Communication protocols, Containerization, Orchestration, Local data processing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain.</Employerdescription>
      <Employerwebsite>https://mhp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20433</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>e97f854b-b4c</externalid>
      <Title>SDV Architect for Test &amp; vECU</Title>
      <Description><![CDATA[<p>Are you looking for a role where you can shape the future of testing and validation in the automotive industry? As an SDV Architect for Test &amp; vECU, you will play a leading role in the development of modern test strategies and test environments. You will combine technical depth with strategic thinking and clear, compelling communication.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Developing and implementing a comprehensive shift-left test strategy for SDV systems based on vECU virtualisation, including business case evaluation, tool comparison, scalability, and cost models</li>
<li>Designing, building, and further developing cloud-based test environments for embedded and software-defined vehicle systems</li>
<li>Providing strategic advice to OEMs, Tier-1s, and technology partners on cloud, hybrid, and hardware test approaches throughout the entire development cycle</li>
<li>Evaluating, selecting, and integrating leading vECU solutions into existing toolchains and CI/CD pipelines</li>
<li>Active stakeholder management, including customer contact, pre-sales support, and presenting strategic recommendations at the management level</li>
<li>Introducing and scaling AI-driven methods in quality assurance, such as intelligent test prioritisation, anomaly detection, or data-driven optimisation of test strategies</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A degree in computer science, electrical engineering, or a related field, combined with over 5 years of experience in test automation of embedded or SDV systems (e.g. ADAS, infotainment, or zone architectures)</li>
<li>Practical experience with vECU approaches from at least two projects, including setting up cloud-based test environments and integration into CI/CD pipelines</li>
<li>Passion for modern software-defined vehicle architectures, innovative test methods, and the use of AI in quality assurance, such as intelligent test case generation, fault classification, or anomaly detection</li>
<li>Expertise in cloud and virtualisation technologies, particularly container and VM-based test infrastructures, as well as hybrid cloud/HIL scenarios</li>
<li>Strong understanding of the market and technology landscape in the areas of vECU, SDV, and automotive testing, combined with experience with common toolchains</li>
</ul>
<p>As the ideal candidate, you will combine a structured, analytical approach with the ability to clearly frame complex, ambiguous problem statements and derive decision options for management and technology stakeholders. You will also bring strong communication and advisory skills, along with a leadership mindset grounded in technical expertise, clarity, and initiative.</p>
<p>The position is available immediately, and we offer a competitive salary and benefits package. If you are interested in this exciting opportunity, please submit your application, including your resume and cover letter, to our careers portal.</p>
<p>We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>vECU, SDV, test automation, cloud computing, virtualisation, containerisation, VM-based test infrastructure, hybrid cloud/HIL scenarios, AI-driven methods, quality assurance, test prioritisation, anomaly detection, data-driven optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalises processes and products for its customers, and accompanies them in their IT transformations along the entire value chain.</Employerdescription>
      <Employerwebsite>https://www.mhp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20406</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bee517db-e9c</externalid>
      <Title>DevOps Engineer (all genders)</Title>
      <Description><![CDATA[<p>Join our DevOps team at Holidu, a central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>
<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>
<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>
<li>Container Orchestration: Kubernetes with Helm</li>
<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/CDK</li>
<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>
<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>
<li>Scripting: Python, Go, Bash</li>
<li>Version Control: GitHub</li>
<li>Collaboration: Jira (Agile)</li>
<li>Automation: N8N, AI-assisted tooling (Agentic ADK)</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DevOps Engineer, you will be responsible for:</p>
<ul>
<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>
<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>
<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>
<li>Troubleshooting production issues and participating in capacity planning</li>
<li>Maintaining and troubleshooting Kubernetes clusters: deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>
<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>
<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub Actions, ArgoRollouts and ArgoCD</li>
<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>
<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>
<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>
<li>Hands-on experience with Docker and Kubernetes in production environments: deploying, scaling, and troubleshooting containerized workloads</li>
<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>
<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>
<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>
<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>
<li>Familiarity with logging and log aggregation systems (Elastic Stack, OpenTelemetry, or similar)</li>
<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>
<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with Helm charts and Kubernetes package management</li>
<li>Familiarity with GitOps workflows (e.g., GitHub Actions, ArgoCD, Flux)</li>
<li>Experience designing AWS service-based architectures</li>
<li>Experience with AI automation or low-code/no-code platforms such as N8N</li>
<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>
<li>Exposure to cost optimization strategies for cloud infrastructure</li>
<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>
<li>Experience with DevSecOps practices: integrating security scanning and compliance into CI/CD pipelines</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>
<li>Technology: Work in a modern tech environment</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud, Container Orchestration, Infrastructure as Code, Monitoring &amp; Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2595036</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77c41bb-0ad</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert, you will have direct experience in a wide range of security technologies, tools, and methodologies. The role is suited for an experienced Application Security engineer with proven understanding in enterprise security and AI security and will focus on building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm’s information and computer systems. Millennium is a complex and robust technical environment and securing the Firm from external and internal threats is a top priority.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</li>
<li>Conduct specialized threat modeling, red teaming, and risk assessments for AI/ML models (e.g., testing for prompt injection, model theft, and data poisoning).</li>
<li>Lead risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects.</li>
<li>Engage throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards.</li>
<li>Evangelize AppSec and AI security best practices through developer education, training materials, and outreach.</li>
<li>Design robust security architectures and integrate automated security testing (SAST/DAST/SCA) into CI/CD pipelines.</li>
<li>Partner with Technology, Trading, Legal, and Compliance to create policies and communicate technical risks to non-technical stakeholders.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field.</li>
<li>5+ years’ experience working as an Application Security Engineer, Software Engineer, or similar role.</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs.</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI.</li>
<li>Experience in working with global teams, collaborating on code and presentations.</li>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure).</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols.</li>
<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc., and integrating security scanning and vulnerability management into CI/CD pipelines.</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions.</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault.</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar.</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.).</li>
<li>Familiarity with web application security testing tools and methodologies.</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus.</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Automated security testing, CI/CD pipelines, Security architectures, Secure configuration principles, Cryptography fundamentals, Encryption protocols, SCM &amp; CI/CD technologies, Security scanning, Vulnerability management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets management, Password vault technologies, Secure programming, Infrastructure as Code tools, Web application security testing tools, Methodologies, Security frameworks, Standards, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a leading investment manager focused on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629927</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a75ea8b-5b4</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies, you will play a key role in building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The successful candidate will have a proven understanding of enterprise security and AI security and will focus on defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and implementing security guardrails for Generative AI, LLMs, and Agentic frameworks</li>
<li>Conducting specialized threat modeling, red teaming, and risk assessments for AI/ML models</li>
<li>Leading risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects</li>
<li>Engaging throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards</li>
<li>Evangelizing AppSec and AI security best practices through developer education, training materials, and outreach</li>
</ul>
<p>Qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field</li>
<li>5+ years&#39; experience working as an Application Security Engineer, Software Engineer, or similar role</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI</li>
<li>Experience in working with global teams, collaborating on code and presentations</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols</li>
<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc., and integrating security scanning and vulnerability management into CI/CD pipelines</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions</li>
<li>Hands-on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or HashiCorp Vault</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>
<li>Familiarity with web application security testing tools and methodologies</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous</li>
</ul>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Developer education, Training materials, Outreach, Common SCM &amp; CI/CD technologies, GitHub, Jenkins, Artifactory, Security Scanning, Vulnerability Management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets Management &amp; Password Vault technologies, Delinea Secret Server, Hashicorp Vault, Secure programming, Python, Java, C++, C#, Infrastructure as Code tools, CloudFormation, Terraform, Ansible, Web application security testing tools, Methodologies, Security frameworks, Standards, ISO 27001, NIST, OWASP, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a leading investment manager focused on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629908</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32af4444-bb2</externalid>
      <Title>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</Title>
      <Description><![CDATA[<p>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</p>
<p>The successful candidate will join a global team responsible for designing and developing Equities Volatility, Risk, PnL, and Market Data systems.</p>
<p>You will work hands-on with other developers, QA, and production support, and will partner closely with Portfolio Managers, Middle Office, and Risk Managers.</p>
<p>We are looking for a very strong senior engineer with deep knowledge of equity derivatives products and their pricing and risk characteristics.</p>
<p>You must be a highly capable hands-on developer with a solid understanding of front-to-back trading system workflows, especially pricing and risk.</p>
<p>Excellent communication skills, strong ownership, and the ability to work effectively in a fast-paced, collaborative environment are essential.</p>
<p>Experience in Unix/Linux environments is required; exposure to cloud and containerization technologies is a plus.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain real-time equity derivatives pricing and risk systems (including volatility and PnL components).</li>
</ul>
<ul>
<li>Implement robust, scalable, and low-latency server-side components in a multi-threaded environment.</li>
</ul>
<ul>
<li>Collaborate with portfolio managers, risk, and middle office to translate business requirements into technical solutions.</li>
</ul>
<ul>
<li>Contribute to UI components as needed (and learn new UI technologies where required).</li>
</ul>
<ul>
<li>Write clear technical documentation and maintain system design and support guides.</li>
</ul>
<ul>
<li>Develop and execute automated tests using approved frameworks; ensure production quality and reliability.</li>
</ul>
<ul>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
</ul>
<p>Qualifications &amp; Skills</p>
<ul>
<li>7+ years of professional experience as a server-side software engineer.</li>
</ul>
<ul>
<li>Deep understanding of equity derivatives products (options, volatility products, exotics) and their pricing and risk measures (e.g., Greeks, PnL attribution).</li>
</ul>
<ul>
<li>Strong experience with concurrent, multi-threaded, and low-latency application architectures.</li>
</ul>
<ul>
<li>Expertise in Object-Oriented design, design patterns, and best practices in unit and integration testing.</li>
</ul>
<ul>
<li>Experience with distributed caching and replication technologies.</li>
</ul>
<ul>
<li>Solid knowledge of Unix/Linux environments is required.</li>
</ul>
<ul>
<li>Experience with Agile/Scrum development methodologies is required.</li>
</ul>
<ul>
<li>Exposure to front-end/UI technologies (JavaScript, HTML5) is a plus.</li>
</ul>
<ul>
<li>Experience with cloud platforms and containerization (e.g., Docker, Kubernetes) is a plus.</li>
</ul>
<ul>
<li>B.S. in Computer Science, Mathematics, Physics, Financial Engineering, or related field.</li>
</ul>
<ul>
<li>Demonstrates thoroughness, attention to detail, and strong ownership of deliverables.</li>
</ul>
<ul>
<li>Effective team player with a strong willingness to collaborate and help others.</li>
</ul>
<ul>
<li>Strong written and verbal communication skills; able to explain complex technical and quantitative topics to non-technical stakeholders.</li>
</ul>
<ul>
<li>Proven ability to write clear, concise documentation.</li>
</ul>
<ul>
<li>Fast learner with the ability to adapt to new technologies and business domains.</li>
</ul>
<ul>
<li>Able to perform under pressure, work with ambitious team members, and handle changing priorities.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>server-side software engineer, equity derivatives products, concurrent, multi-threaded, and low-latency application architectures, Object-Oriented design, Unix/Linux environments, Agile/Scrum development methodologies, cloud platforms and containerization, front-end/UI technologies, distributed caching and replication technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology organisation that designs and develops systems for equities volatility, risk, PnL, and market data.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954587117</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:
<ul>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
<li>Streamlining of development and deployment processes.</li>
<li>Implementation of MCP servers that assist the rest of the Support Engineering team and proactively monitor the production environment.</li>
</ul>
</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux / Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology organisation within a leading investment manager that designs and develops trading and operations systems.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2995faa-123</externalid>
      <Title>Software Engineer – Equity Derivatives Pricing &amp; Risk System</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Java Developer with a strong background in Equity Derivatives to join our team in London.</p>
<p>In this role, you will play a pivotal part in building and enhancing the Equity Volatility Risk and P&amp;L system that supports our Equity Volatility Managers.</p>
<p>This is an exciting opportunity to work in a fast-paced hedge fund environment, where your contributions will directly impact trading performance and risk management capabilities.</p>
<p>The ideal candidate will bring a combination of technical expertise and business domain knowledge to develop robust, scalable systems.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Design, develop, and implement a robust risk system for Equity Volatility trading strategies.</li>
<li>Build and maintain scalable, high-performance server-side application using Java and Spring Boot frameworks.</li>
<li>Build and integrate exotic pricing models to handle the pricing and lifecycle of these products.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
<li>Proactively address system bottlenecks and implement solutions to ensure the platform remains robust.</li>
<li>Conduct code reviews and implement automated testing to ensure the reliability and quality of the system.</li>
<li>Write clean, maintainable, and testable code, adhering to best practices in software engineering.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Proficiency in Java development with experience in building scalable, high-performance systems.</li>
<li>Strong knowledge of Spring Boot and its ecosystem for developing microservices.</li>
<li>Experience with Python for scripting and automation.</li>
<li>Experience in distributed caching technologies (e.g., Ignite or similar).</li>
<li>Familiarity with containerization technologies (e.g., Podman, Kubernetes) and cloud computing platforms (e.g., AWS).</li>
<li>Solid understanding of software development best practices, including version control (e.g., Git), CI/CD pipelines, and automated testing frameworks.</li>
<li>Previous experience working with Equity Derivatives in a sell-side or buy-side firm.</li>
<li>Strong understanding of equity derivative products such as options and futures.</li>
<li>Some understanding of structured products in terms of pricing, lifecycle, and risk characteristics.</li>
<li>Strong problem-solving skills and the ability to work effectively in a fast-paced, high-pressure environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, Python, Distributed caching technologies, Containerization technologies, Cloud computing platforms, Version control, CI/CD pipelines, Automated testing frameworks, Equity Derivatives</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides software solutions for the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955392398</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34fa7d64-89a</externalid>
      <Title>Technical Product Manager - Linux Developer Experience</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Technical Product Manager to join our team responsible for shaping and evolving the developer experience on our firm&#39;s developer platform.</p>
<p>In this pivotal role, you&#39;ll serve as the primary liaison between the platform engineering team and our developer community, including quantitative analysts, researchers, and front-office trading teams, ensuring the platform meets their complex development needs and continuously improves.</p>
<p>The Developer Platform team architects, engineers, and enhances the firm&#39;s developer toolchain and workflows. We collaborate closely with developers, quants, researchers, and front-office trading teams to ensure our platform provides a best-in-class development experience with the feel of native Mac/UNIX-like development.</p>
<p>This role sits at the intersection of product management and technical enablement, acting as the voice of the developer within the platform team.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build and maintain relationships with technologists and developers across the firm to deeply understand their workflows, pain points, and emerging needs</li>
</ul>
<ul>
<li>Discover novel use cases and translate them into actionable product requirements for the platform engineering team</li>
</ul>
<ul>
<li>Serve as the first point of contact for developer questions about the platform&#39;s environment, tooling, and capabilities</li>
</ul>
<ul>
<li>Triage and reproduce issues reported by developers, driving initial diagnosis (including leveraging AI-assisted sessions for problem analysis) and escalating to the core engineering team when necessary</li>
</ul>
<ul>
<li>Drive the roadmap and prioritization of platform enhancements in collaboration with engineering leadership</li>
</ul>
<ul>
<li>Promote and evangelize the Linux developer platform, driving adoption and ensuring developers are aware of available features and best practices</li>
</ul>
<ul>
<li>Manage project timelines, stakeholder communication, and delivery milestones for platform initiatives</li>
</ul>
<p>Qualifications / Skills Required:</p>
<ul>
<li>Demonstrated experience in Technical Product Management, Technical Project Management, or Developer Relations/Developer Experience roles</li>
</ul>
<ul>
<li>Strong communication and stakeholder management skills; ability to engage credibly with both highly technical developers and senior leadership</li>
</ul>
<ul>
<li>Working familiarity with Linux desktop environments; comfortable navigating the platform, understanding developer workflows, and answering environment/tooling questions</li>
</ul>
<ul>
<li>Conceptual understanding of containerization and orchestration (Docker, Podman, Kubernetes) and how developers leverage these tools in their workflows</li>
</ul>
<ul>
<li>Familiarity with CI/CD concepts and tools (e.g., Jenkins, Git); enough to understand developer pipelines and identify friction points</li>
</ul>
<ul>
<li>Problem reproduction and triage skills; ability to recreate reported issues in the environment and clearly document/escalate to engineering with relevant context</li>
</ul>
<ul>
<li>Experience leveraging AI tools (e.g., LLM-based assistants, copilots) to assist in problem diagnosis, research, and knowledge synthesis</li>
</ul>
<ul>
<li>Basic scripting literacy (Bash, Python); enough to read, understand, and run existing scripts, though not necessarily to write complex automation from scratch</li>
</ul>
<p>Qualifications / Skills Desired:</p>
<ul>
<li>Familiarity with serverless compute concepts and cloud-native development paradigms</li>
</ul>
<ul>
<li>Exposure to configuration management tools (e.g., Ansible) and image lifecycle management (e.g., HashiCorp Packer); understanding what they do and how they fit into the platform, rather than hands-on administration</li>
</ul>
<ul>
<li>Awareness of monitoring and observability tools (Prometheus, Grafana, ELK stack) from a user/consumer perspective</li>
</ul>
<ul>
<li>Understanding of authentication and identity management concepts (e.g., Active Directory integration) as they relate to developer access and workflows</li>
</ul>
<ul>
<li>Experience with agile project management methodologies and tools (Jira, Confluence, or similar)</li>
</ul>
<ul>
<li>Strong communication skills when working with engineering leadership, the developer community, and other stakeholders</li>
</ul>
<ul>
<li>Bachelor’s degree in Computer Science or a related field</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Technical Product Management, Technical Project Management, Developer Relations/Developer Experience, Linux desktop environments, Containerization and orchestration, CI/CD concepts and tools, Problem reproduction and triage skills, AI tools, Basic scripting literacy, Serverless compute concepts and cloud-native development paradigms, Configuration management tools, Image lifecycle management, Monitoring and observability tools, Authentication and identity management concepts, Agile project management methodologies and tools</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a department within a larger organisation that focuses on providing and maintaining the underlying technology infrastructure.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953932410</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77f9754-179</externalid>
      <Title>Java Algo Developer - EQ Trading Technology</Title>
      <Description><![CDATA[<p>We are seeking a skilled Java Algo Developer to join our high-performing algorithmic development team at EQ Trading Technology. As a Java Algo Developer, you will partner closely with fellow technologists, Execution Services, and Equity Finance team to enhance our execution offering to Portfolio Managers across various teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to develop and implement real-time algorithmic trading systems and execution platforms.</li>
<li>Design, build, and maintain high-quality software to meet product performance and quality expectations.</li>
<li>Stay current on state-of-the-art technologies and tools, including technical libraries, computing environments, and academic research.</li>
<li>Troubleshoot and resolve complex issues with our critical trading infrastructure.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong server-side Java knowledge, including Spring Boot framework.</li>
<li>Experience with financial order/execution data, positions data, and market data.</li>
<li>Knowledge of equities, options, SOR, VWAP, algorithmic trading platforms, or market microstructure.</li>
<li>High focus on testability of programs (TDD/XP-based development preferred).</li>
<li>Experience with proprietary Java frameworks and design patterns.</li>
<li>Good DevOps understanding to drive testing automation.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>5+ years of development experience in Algos or order management systems.</li>
<li>Good understanding of Asia equities markets, including auctions, microstructure, and regulatory constraints.</li>
<li>Experience with inventory optimization in developing Asian markets (non-give-up) is highly desirable.</li>
<li>Good team player with excellent written and oral communication skills.</li>
<li>Quick thinker and problem solver, able to think on their feet and make informed decisions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>server-side Java, Spring Boot framework, financial order/execution data, positions data, market data, equities, options, SOR, VWAP, algorithmic trading platforms, market microstructure, testability of programs, proprietary Java frameworks, design patterns, DevOps, AI tools, cloud platform, containerization tools, Kdb+/Q, front-end development</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is an IT organisation. It provides technology solutions to various sectors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955637002</Applyto>
      <Location>Tokyo, Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend on AWS (Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or another scientific field</li>
<li>3+ years of experience with CI/CD tools such as TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them within Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>54fe5027-35b</externalid>
      <Title>Container Washer</Title>
      <Description><![CDATA[<p>Join our team at Bayer, where we&#39;re committed to overcoming the world&#39;s greatest challenges and contributing to a world where food and medical care are accessible to everyone.</p>
<p>As a Container Washer, you&#39;ll play a crucial role in ensuring the cleanliness and quality of our production processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Clean containers, pallets, stainless steel vessels, and washing equipment according to standard operating procedures</li>
<li>Clean rooms and facilities according to standard operating procedures</li>
<li>Document cleaning activities according to standard operating procedures</li>
<li>Conduct inventory checks of cleaning materials after a set period</li>
<li>Operate computerized cleaning systems and monitor system status, using software from the production control system</li>
<li>Report any issues and assist in identifying problems</li>
<li>Transport equipment within the facility and operate the automated transportation system</li>
<li>Maintain safety standards and support improvements</li>
</ul>
<p>Requirements:</p>
<ul>
<li>6-12 months of work experience</li>
<li>Ideally, experience in pharmaceutical production</li>
<li>Knowledge of production processes</li>
<li>IT skills for using standard software</li>
<li>Strong quality awareness and technical understanding</li>
<li>Proactive, reliable, and team-oriented with high levels of personal responsibility</li>
<li>Excellent German language skills (written and spoken)</li>
<li>Willingness to work in shift operations</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>Competitive salary of €3,307 per month (full-time) plus annual bonus, holiday pay, and Christmas bonus/13th month pay</li>
<li>Opportunities for professional development through access to learning platforms such as LinkedIn Learning and Education First</li>
<li>Support for health and well-being</li>
<li>Sustainable mobility options, including job ticket and leased company bicycles</li>
<li>Access to exclusive benefits and discounts from over 150 brands through our Corporate Benefits Program</li>
<li>Celebrating diversity in an inclusive work environment where you&#39;re welcome, supported, and encouraged to bring your whole self to work</li>
</ul>
<p>This is a 2-year fixed-term position with the possibility of extension.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>container cleaning, pallet cleaning, stainless steel vessel cleaning, washing equipment operation, computerized cleaning system operation, production control system software, inventory management, safety standards maintenance, quality awareness, technical understanding</Skills>
      <Category>Manufacturing</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops medicines and crop protection products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976635713</Applyto>
      <Location>Weimar</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90b5ac1d-d16</externalid>
      <Title>Senior Software Engineer, Backend — Frontier Data</Title>
      <Description><![CDATA[<p>The Frontier Data team builds the data and systems that power Scale&#39;s most advanced Frontier AI use cases. We&#39;re looking for a Senior Backend Engineer who thrives in ambiguity, moves fast, and enjoys tackling daunting challenges.</p>
<p>As a Senior Backend Engineer, you will own major backend systems for frontier agentic data products, driving projects from early exploration through production deployment. You will build scalable services and pipelines that support agent workflows, architect modular, reusable backend systems, and operate in high-ambiguity environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building scalable systems while partnering closely with research, product, operations, and other engineering teams</li>
<li>Building scalable services and pipelines that support agent workflows</li>
<li>Architecting modular, reusable backend systems that adapt to evolving product needs</li>
<li>Operating in high-ambiguity environments and breaking down open-ended problems</li>
<li>Partnering cross-functionally with product, research/ML, and infrastructure teams</li>
</ul>
<p>Ideal experience includes 5+ years of full-time software engineering experience, strong backend engineering fundamentals, and experience building systems that scale.</p>
<p>Compensation packages at Scale include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors.</p>
<p>Additional benefits include comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Distributed systems, API design, Data modeling, Production reliability, Docker, Containerized development/production environments, SQL, Modern database-backed application development, Async processing, Workflow engines, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4648525005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c19e39af-feb</externalid>
      <Title>Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, cross-functional collaboration with operations, LLM solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676602005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country>United Arab Emirates</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2d16873c-e17</externalid>
      <Title>Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, cross functional collaboration, LLM solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676600005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country>Qatar</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
<Description><![CDATA[<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>252000</Compensationmin>
      <Compensationmax>362000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bebb6dc-380</externalid>
      <Title>Staff Software Engineer, Platform</Title>
<Description><![CDATA[<p>We live in unprecedented times: AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion-parameter scale, while large enterprises figure out how to add AI to their products.</p>
<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>
<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>
<p>Impact and Responsibilities:</p>
<ul>
<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>
<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>
<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>
<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>8+ years of full-time engineering experience post-graduation, with specialties in back-end systems.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, and Docker.</li>
<li>Experience with orchestration platforms such as Temporal and AWS Step Functions.</li>
<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>
<li>Experience scaling products at hyper-growth startups.</li>
<li>Excitement to work with AI technologies.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Software development, Distributed systems, Public cloud platforms, Containerization &amp; deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>252000</Compensationmin>
      <Compensationmax>315000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649893005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, and proactively identify opportunities for and drive improvements to current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;d have 8+ years of full-time engineering experience post-graduation, with specialties in back-end systems, specifically in building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience with database technologies, streaming/processing solutions, indexing/caching, and data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>252000</Compensationmin>
      <Compensationmax>315000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c64368dd-789</externalid>
      <Title>Software Engineer, ARC Team</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Software Engineer, ARC (Architecture, Reliability, &amp; Compute) to join our dynamic Public Sector Engineering team.</p>
<p>As part of this team, you will define how the company ships software, establishing the patterns for deploying into complex government and high-security environments rather than just running Terraform scripts.</p>
<p>You will build and maintain internal CLIs/tools that standardize testing, deployment, and environment management, and that engineering relies on to prevent downstream breakages.</p>
<p>You will execute on automated deployment efforts to pay down tech debt, creating fully functional staging/testing environments, and defining the company&#39;s standard for safe deployments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement secure, scalable backend systems for Public Sector customers, leveraging Scale&#39;s modern and cloud-native AI infrastructure.</li>
<li>Own services or systems and define their long-term health goals, while also improving the health of surrounding components.</li>
<li>Re-architect the stack to run in compliant or restrictive environments. This requires designing swappable components (auth, storage, logging) to meet government/security mandates without breaking the product.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
<li>Contribute to the platform roadmap and product strategy for Scale AI&#39;s Public Sector business, playing a key role in shaping the future direction of our offerings.</li>
</ul>
<p>Must have:</p>
<ul>
<li>At least an active Secret clearance and the ability &amp; willingness to uplevel to TS/SCI with CI Poly. This is a hard requirement; candidates who do not hold at least a Secret clearance will not be considered.</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Full Stack Development: Proficiency in both front-end and back-end development, including experience with modern web development frameworks, programming languages, and databases. Experience developing &amp; delivering software to air-gapped &amp; isolated environments is a plus.</li>
<li>Cloud-Native Technologies: Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is desired, along with familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience developing and deploying applications in a cloud-native environment.</li>
<li>Security Focused: Experience with Federal Compliance frameworks and requirements (e.g., Cloud SRG, FedRAMP, STIG Benchmarks), and with developing software &amp; technical solutions that meet strict security &amp; regulatory compliance requirements.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to collaborate effectively with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
<li>The ability to work 3-4 days a week from the DC, SF, NYC, or STL office.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may also be eligible for further benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$138,000-$259,440 USD</Salaryrange>
      <Skills>Cloud-Native Technologies, Containerization, Container Orchestration, Cloud Platforms, Federal Compliance Frameworks, Security Focused, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>138000</Compensationmin>
      <Compensationmax>259440</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673771005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Agent Development Wiz</li>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<ul>
<li>Infrastructure Guru</li>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a DevOps, platform, or infrastructure role</li>
</ul>
<ul>
<li>Customer Product Whisperer</li>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43952002-812</externalid>
      <Title>Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices. You&#39;ll also get widespread exposure to the forefront of the AI race as Scale sees it in enterprises, startups, governments, and large tech companies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>
<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails and security frameworks, and hosting knowledge-sharing sessions.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like Node.js, Next.js, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, Node.js, Next.js</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676936005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as TINES and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b255adba-bf4</externalid>
      <Title>Field Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Field Engineer to join our Public Sector team. As a Field Engineer, you will be on the front lines of our field engineering efforts for our federal AI projects, working closely with our largest public sector customers to ensure seamless and optimized experiences with Scale&#39;s technology.</p>
<p>Your primary responsibilities will include implementing end-to-end data integrations, syncing customers&#39; data to Scale&#39;s platform and back, and working closely with our customers&#39; engineering teams to optimize data pipelines. You will also design, develop, and maintain playbooks, internal tools, Scale&#39;s documentation, and SDKs to quickly set customers up for long-term success.</p>
<p>In addition, you will partner with Software Engineers and Operations to remove any technical hurdles customers may face, debug technical issues impacting delivery and own technical escalations coming from the customer. You will be accountable for the customer&#39;s technical experience throughout their time with Scale.</p>
<p>The ideal candidate will have a track record of success in a hybrid customer-facing engineering role or a similar function, wearing multiple hats along the way. Prior hands-on technical experience working with clients in a pre- or post-sales capacity to realize business goals is also required.</p>
<p>We offer a competitive compensation package, including base salary, equity, and benefits. The base salary range for this full-time position is $190,000-$290,000 USD in San Francisco, New York, and Seattle, $170,000-$260,000 USD in Hawaii, Washington DC, Texas, and Colorado, and $140,000-$220,000 USD in St. Louis.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000-$290,000 USD in San Francisco, New York, and Seattle, $170,000-$260,000 USD in Hawaii, Washington DC, Texas, and Colorado, and $140,000-$220,000 USD in St. Louis</Salaryrange>
      <Skills>Python, JavaScript, API integrations, Large Language Models, 2D Image Annotation, Container orchestration with Kubernetes, Helm charts for application deployment, Ansible or similar tools for automation, Experience in AI, Experience working in classified environments, Previous experience as a technical go-to-market resource, Understanding of DevSecOps principles</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4518690005</Applyto>
      <Location>San Francisco, CA; New York, NY; Honolulu, HI; St. Louis, MO; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
      <Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
      <Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a6557b2b-d24</externalid>
      <Title>Senior Platform Engineer II, Compute Services</Title>
      <Description><![CDATA[<p>We are seeking a Senior Platform Engineer to join our Kubernetes Infrastructure team. This role involves administering our critical multi-tenant Kubernetes platforms and collaborating with development teams to establish proper deployment architectures.</p>
<p>The ideal candidate will have a strong background in resilient Kubernetes application architecture and deployment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Champion reliability initiatives for Kubernetes application deployments: Advocate for best practices to ensure high availability, scalability, and resilience of applications in Kubernetes, focusing on robust testing, secure pipelines, and efficient resource use.</li>
<li>Administer multi-tenant Kubernetes platforms: Manage complex multi-tenant Kubernetes clusters, configuring access, quotas, and security for isolation and optimal resource allocation while upholding SLAs.</li>
<li>Perform lifecycle and day-2 operations on clusters: Execute Kubernetes cluster lifecycle tasks, including provisioning, patching, monitoring, backup, disaster recovery, and troubleshooting.</li>
<li>Deep dive into reliability issues: Conduct in-depth analysis and root cause identification for complex reliability incidents in Kubernetes, utilizing advanced debugging and monitoring tools to propose preventative measures.</li>
<li>Perform on-call duties: Respond to critical alerts and incidents outside business hours, providing timely resolution to minimize disruptions, collaborating with teams, and communicating clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s in CS, Engineering, or related field, or equivalent experience preferred.</li>
<li>CKA or similar certification is highly desirable.</li>
<li>5+ years administering multi-tenant SaaS Kubernetes (EKS, AKS, GKE).</li>
<li>Strong GitOps/DevOps experience with Argo CD or similar Helm chart management.</li>
<li>Proven Docker and containerization experience.</li>
<li>Strong Linux OS experience.</li>
<li>Proficient in Go.</li>
<li>Excellent problem-solving, debugging, and analytical skills.</li>
<li>Strong communication and collaboration.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>Benefits</strong></p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p><strong>Workplace</strong></p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Gitops/Devops, Argocd, Helm chart management, Docker, Containerization, Linux OS, Go, Problem-solving, Debugging, Analytical skills, Communication, Collaboration, CKA, Performance profiling, Optimization of distributed systems, Network protocols, Distributed consensus algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4607559006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5717691a-508</externalid>
      <Title>Staff Infrastructure Software Engineer, Enterprise AI</Title>
      <Description><![CDATA[<p>We are looking for a Staff Infrastructure Software Engineer to act as a primary technical lead, engineering the &#39;paved road&#39; for our knowledge retrieval and inference engines. You will define the deployment standards for Agentic workflows at scale, bridging the gap between complex AI orchestration and world-class infrastructure.</p>
<p>The ideal candidate thrives in a fast-paced environment, has a passion for both deep technical work and mentoring, and is capable of setting a long-term technical strategy for a critical domain while maintaining a strong, hands-on delivery focus.</p>
<p>You will architect and implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers.</li>
<li>Using our own data and AI platform to analyse build and test logs and metrics to identify areas for improvement.</li>
<li>Defining the architectural patterns for our multi-cloud infrastructure to support secure, reliable, and scalable Agentic workflows for enterprise customers.</li>
<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>
<li>Designing and championing highly scalable, reliable, and low-latency infrastructure and frameworks for building, orchestrating, and evaluating multi-agent systems at enterprise scale.</li>
<li>Leading the infrastructure roadmap with a strong focus on compliance, privacy, and security standards, including designing change management and data isolation strategies.</li>
<li>Owning the development and maintenance of our best-in-class Agentic observability platform (logging, metrics, tracing, and analytics) to proactively ensure system health and enable rapid incident response.</li>
<li>Driving developer efficiency by building automated tooling and championing Infrastructure-as-Code (IaC) paradigms throughout the engineering organization to improve workflows and operational efficiency.</li>
</ul>
<p>The ideal candidate has proven experience in a senior role, with 5+ years of full-time software engineering experience, and a deep understanding of modern infrastructure practices, including CI/CD, IaC (e.g., Terraform, Helm Charts), container orchestration (e.g., Kubernetes) and observability platforms (e.g., Datadog, Prometheus, Grafana).</p>
<p>Extensive experience with at least one major cloud provider (AWS, Azure, or GCP) and strong knowledge of security and compliance in enterprise environments, with a focus on access management, data isolation, and customer-specific VPC setups is required.</p>
<p>Proficiency in Python or JavaScript/TypeScript, and SQL is also necessary.</p>
<p>Bonus points for hands-on experience and a passion for working with Agents, LLMs, vector databases, and other emerging AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,200-$310,500 USD</Salaryrange>
      <Skills>Cloud computing, Infrastructure as Code, Container orchestration, Observability platforms, Security and compliance, Access management, Data isolation, Customer-specific VPC setups, Python, JavaScript/TypeScript, SQL, Agents, LLMs, Vector databases, Emerging AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4599700005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
</ul>
<ul>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
</ul>
<ul>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
</ul>
<ul>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
</ul>
<ul>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
</ul>
<ul>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
</ul>
<ul>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features</li>
</ul>
<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>
<ul>
<li>Enable Human-in-the-Loop AI Training</li>
</ul>
<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>
<ul>
<li>Support RLHF and Preference Data Workflows</li>
</ul>
<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>
<ul>
<li>Leverage LLMs in the Review Loop</li>
</ul>
<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>
<ul>
<li>Advance AI Evaluation</li>
</ul>
<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>
<ul>
<li>Create Intuitive, Reviewer-Focused Interfaces</li>
</ul>
<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>
<ul>
<li>Architect Scalable Data &amp; Service Layers</li>
</ul>
<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>
<ul>
<li>Solve Ambiguous, Real-World Problems</li>
</ul>
<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>
<ul>
<li>Ensure System Reliability</li>
</ul>
<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>
<ul>
<li>Elevate the Team</li>
</ul>
<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
</ul>
<ul>
<li>2+ years of experience in a software or machine learning engineering role.</li>
</ul>
<ul>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
</ul>
<ul>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
</ul>
<ul>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
</ul>
<ul>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills.</li>
</ul>
<ul>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
</ul>
<ul>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
</ul>
<ul>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
</ul>
<ul>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
</ul>
<ul>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
</ul>
<ul>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
</ul>
<ul>
<li>APIs: GraphQL</li>
</ul>
<ul>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
</ul>
<ul>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
</ul>
<ul>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
</ul>
<ul>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
</ul>
<ul>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>82e9a289-022</externalid>
      <Title>Senior Software Engineer - Application Traffic team</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer on the Application Traffic team, you will design and build the systems that power Databricks&#39; service-to-service communication across thousands of clusters in a multi-cloud environment. You will also help create abstractions that hide networking complexity from product teams, making connectivity, discovery, and reliability seamless by default.</p>
<p>You&#39;ll work across three key areas that define Databricks&#39; networking stack:</p>
<p>Ingress Control Plane: Build the control plane for Databricks&#39; global ingress layer. Enable programming of API gateways with static and dynamic endpoints, simplify service onboarding, and make it easy to expose APIs securely across clouds.</p>
<p>Service-to-Service Communication: Design scalable mechanisms for service discovery and load balancing across thousands of clusters. Provide networking abstractions so product teams don&#39;t need to worry about underlying connectivity details.</p>
<p>Overload Protection: Build intelligent rate limiting and admission control systems to protect critical services under high load. Ensure reliability and predictable performance for both customer-facing and internal workloads.</p>
<p>We&#39;re looking for someone with a strong proficiency in one or more languages such as Java, Scala, Go, or C++, and experience with service-oriented architectures and large scale distributed systems. Familiarity with cloud platforms (AWS, Azure, GCP) and container/orchestration technologies (Kubernetes, Docker) is also required. A track record of shipping infrastructure that supports mission-critical workloads at scale is essential.</p>
<p>The pay range for this role is $166,000-$225,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, service-oriented architectures, large scale distributed systems, cloud platforms, container/orchestration technologies, service discovery, DNS, load balancing, Envoy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and operates the world&apos;s best data and AI infrastructure platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8183195002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>Contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability. Troubleshoot and resolve complex deployment issues, ensuring minimal disruption to development cycles.</p>
<p>Develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment. Document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>Proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes. Drive standardization and foster collaboration across different product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. is also required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d935f4fa-322</externalid>
      <Title>Engineering Manager, Forward Deployed Engineering</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Engineering Manager, Forward Deployed Engineering</p>
<p><strong>Job Description</strong></p>
<p>We are seeking a commercially-minded engineering leader to lead our Forward Deployed Engineering (FDE) New Business team in EMEA. This role is pivotal in helping Intercom scale its AI-first platform to the world’s most complex organisations.</p>
<p><strong>Key Responsibilities</strong></p>
<p>As a hands-on leader, you will:</p>
<ul>
<li>Lead, coach, and nurture a high-performing FDE team while operating under pressure in high-stakes customer engagements.</li>
<li>Own end-to-end outcomes through clarity in communication, speed of execution, tight coordination, and technical quality.</li>
<li>Operate as a player-coach, actively engaging in strategic deals while developing team capabilities.</li>
<li>Lead discovery, design, and delivery of tailored technical solutions, including PoCs, evaluations and business value assessments.</li>
<li>Champion a customer-obsessed culture, spotting early indicators of success or failure in customer engagements and raising and addressing them with urgency.</li>
<li>Support opportunities with technical guidance, architecture, demos, and product evaluation support, as well as sales expertise.</li>
<li>Contribute to codifying successful deployments into reusable tools, playbooks, and inputs to the product roadmap, and create leverage for Intercom and our customers.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of technical experience in roles such as Software Engineer, Forward Deployed Engineer, Solutions Architect, Applied AI or related technical roles.</li>
<li>2+ years of experience leading technical customer-facing teams, with a proven track record of mentoring and managing high-performing teams.</li>
<li>Strong technical judgment and the ability to coach engineers through complex architectural trade-offs.</li>
<li>Comfortable with a problem space that is ambiguous in nature, and capable of translating that ambiguity into clear signals for Product and Engineering and for positive customer outcomes.</li>
<li>Ability to flex working hours to partner with global teams.</li>
<li>Excellent communication and presentation skills.</li>
</ul>
<p><strong>Bonus Skills &amp; Attributes</strong></p>
<ul>
<li>Experience selling and deploying AI, data, or highly technical products in complex enterprise environments.</li>
<li>Curiosity and enthusiasm for AI, with a desire to learn how ML systems are developed and operated in production.</li>
<li>Experience hiring and managing high-performing teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>We are a well-treated bunch, with awesome benefits!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Flexible paid time off policy</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme. With secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineer, Forward Deployed Engineer, Solutions Architect, Applied AI, Technical Leadership, AI, Data, Highly Technical Products, Cloud Computing, Containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer experiences for businesses. It was founded in 2011 and trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7749413</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4cd384f-6ed</externalid>
      <Title>Senior Software Engineer, Release Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>
<p>In this role, you will participate in the full software lifecycle, from design and implementation to deployment and long-term operation, and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>
<p>Key responsibilities include designing, building, and maintaining tools and systems that support release automation and deployment workflows, writing clean, reliable, and concurrent code that supports distributed systems, collaborating with cross-functional teams to understand and improve release quality and developer productivity, documenting technical designs, deployment practices, and operational procedures, and participating in small-team design reviews and contributing practical engineering solutions.</p>
<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>
<p>To be successful in this role, you will need strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar), a solid understanding of concurrency, distributed systems, and multi-threaded programming, experience contributing to backend systems, tooling, infrastructure, or developer workflows, a track record of solving moderately complex problems with reliable, maintainable solutions, and the ability to collaborate effectively in a remote, fast-paced environment.</p>
<p>Additionally, you will have familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration, experience with cloud environments (AWS, GCP) and container tooling, and exposure to distributed systems orchestration, observability tooling, or platform engineering.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000 - $237,600</Salaryrange>
      <Skills>Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and makes applications more reliable.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a close-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
</ul>
<ul>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
</ul>
<ul>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
</ul>
<ul>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
</ul>
<ul>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
</ul>
<ul>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
</ul>
<ul>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
</ul>
<ul>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully remote, distributed team.</li>
</ul>
<ul>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
</ul>
<ul>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
</ul>
<ul>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
</ul>
<ul>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
</ul>
<ul>
<li>Take strong ownership of your work from end-to-end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
</ul>
<ul>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
</ul>
<ul>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
</ul>
<ul>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
</ul>
<ul>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
</ul>
<ul>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
</ul>
<ul>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
</ul>
<ul>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
</ul>
<ul>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p><strong>Equity &amp; Benefits</strong></p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
</ul>
<ul>
<li>Equity or comparable benefits may be offered, depending on legal limitations</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
</ul>
<ul>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
</ul>
<ul>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
</ul>
<ul>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1869fa15-51d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>
<p>You will drive the design, and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>
<p>Ideally, you&#39;d have 3+ years of full-time, post-graduation engineering experience, with a specialty in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred). You should have a track record of independently owning successful engineering projects, excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</p>
<p>You should have experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. You should have experience with orchestration platforms, such as Temporal and AWS Step Functions. You should have experience with NoSQL document databases (MongoDB) and structured databases (Postgres). You should have strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>
<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, containerization &amp; deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4594879005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d9b7d5ae-6bf</externalid>
      <Title>Software Engineer, Distributed Systems</Title>
      <Description><![CDATA[<p>We&#39;re growing our team of passionate creatives and builders on a mission to make design accessible to all. Our platform helps teams bring ideas to life,whether you&#39;re brainstorming, creating a prototype, translating designs into code, or iterating with AI. From idea to product, Figma empowers teams to streamline workflows, move faster, and work together in real time from anywhere in the world.</p>
<p>As a Software Engineer on our Infrastructure team, you’ll help design, build, and operate the systems that power our real-time collaborative design tools used by millions of people worldwide. We’re scaling fast, and we’re looking for experienced distributed systems engineers across a variety of teams. Whether you’re passionate about storage, compute orchestration, developer tooling, networking, or real-time data systems, this role offers an opportunity to shape the technical foundation of one of the most beloved design platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain scalable and reliable infrastructure systems that support product innovation and user collaboration at scale.</li>
</ul>
<ul>
<li>Architect and evolve distributed systems including storage platforms, streaming infrastructure, and compute orchestration.</li>
</ul>
<ul>
<li>Improve developer experience by building internal platforms, CI/CD systems, build tools, and APIs.</li>
</ul>
<ul>
<li>Collaborate across product and infrastructure teams to design secure, maintainable, and performant systems.</li>
</ul>
<ul>
<li>Participate in shaping platform strategy, roadmaps, and engineering best practices across the organization.</li>
</ul>
<ul>
<li>Debug and resolve complex production issues that span services and layers of the stack.</li>
</ul>
<ul>
<li>Mentor engineers and foster a culture of collaboration, inclusivity, and technical excellence.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of Software Engineering experience, specifically in backend or infrastructure engineering.</li>
</ul>
<ul>
<li>Deep understanding of distributed systems concepts such as sharding, replication, consistency, and eventual convergence.</li>
</ul>
<ul>
<li>Experience with cloud-native environments (AWS, GCP, or Azure), infrastructure-as-code, and container orchestration.</li>
</ul>
<ul>
<li>Proficiency in languages such as Go, TypeScript, Python, Rust, or Ruby.</li>
</ul>
<ul>
<li>Strong system design skills and a track record of architecting resilient production systems.</li>
</ul>
<ul>
<li>Excellent communication skills, with experience collaborating across teams and mentoring others.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience scaling storage platforms (e.g., Postgres, Redis, S3, DynamoDB) or operating streaming systems like Kafka.</li>
</ul>
<ul>
<li>Background in traffic management, DDoS mitigation, or service mesh technologies (e.g., Envoy, Istio).</li>
</ul>
<ul>
<li>A history of developing complex, real-time distributed systems at scale.</li>
</ul>
<ul>
<li>A passion for building developer productivity tools, including development environments, CI/CD pipelines, and build systems.</li>
</ul>
<ul>
<li>Experience with evolving large-scale, shared developer platforms to improve reliability and developer velocity.</li>
</ul>
<ul>
<li>Strong problem-solving skills and a bias for action, especially when tackling high-impact, gritty challenges.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$153,000-$376,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native environments, infrastructure-as-code, container orchestration, Go, TypeScript, Python, Rust, Ruby, system design, resilient production systems, storage platforms, streaming infrastructure, compute orchestration, developer tooling, networking, real-time data systems, traffic management, DDoS mitigation, service mesh technologies, complex distributed systems, developer productivity tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design platform that helps teams bring ideas to life through real-time collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5552549004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[<p>Job Title: Senior Staff Operations Engineer, AIOps</p>
<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
</ul>
<ul>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
</ul>
<ul>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
</ul>
<ul>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
</ul>
<ul>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
</ul>
<ul>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
</ul>
<ul>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
</ul>
<ul>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
</ul>
<ul>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
</ul>
<ul>
<li>Proficient in Python or Go.</li>
</ul>
<ul>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
</ul>
<ul>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
</ul>
<ul>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
</ul>
<ul>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
</ul>
<ul>
<li>Strong communication and interpersonal skills.</li>
</ul>
<ul>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ed46937-df6</externalid>
      <Title>Staff Developer Success Engineer - West</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Developer Success Engineer to join our team. As a frontline technical expert for our developer community, you will help users deploy and scale Temporal in cloud-native environments. You will also troubleshoot complex infrastructure issues, optimize performance, and develop automation solutions.</p>
<p>At Temporal, you&#39;ll work with cloud-native, highly scalable infrastructure spanning AWS, GCP, Kubernetes, and microservices. You&#39;ll gain deep expertise in container orchestration, networking, and observability while learning from complex, real-world customer use cases.</p>
<p>As a Staff Developer Success Engineer, you&#39;ll work directly with developers to debug complex infrastructure issues, optimize cloud performance, and enhance reliability for Temporal users. You&#39;ll develop observability solutions (Grafana, Prometheus), improve networking (load balancing, DNS, ingress/egress), and automate infrastructure operations (Terraform, IaC) to help customers run Temporal efficiently at scale.</p>
<p>Once ramped up, we expect you to independently drive technical solutions, whether debugging complex production issues or designing infrastructure best practices. Don&#39;t worry, we have seasoned engineers and mentors to support you along the way!</p>
<p>As a Staff Developer Success Engineer, you will engage directly with developers, engineering teams, and product teams to understand infrastructure challenges and provide solutions that enhance scalability, performance, and reliability.</p>
<p>Your insights will influence platform improvements, from enhancing observability tooling to developing self-service infrastructure solutions that simplify troubleshooting (e.g., building diagnostic tools similar to Twilio’s Network Test).</p>
<p>You’ll serve as a bridge between developers and infrastructure, ensuring that reliability, performance, and developer experience remain top priorities as Temporal scales.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>cloud-native infrastructure, container orchestration, networking, observability, infrastructure automation, Terraform, IaC, Kubernetes, AWS, GCP, Python, Java, Go, Grafana, Prometheus, security certificate management, security implementation, use case analysis, Temporal design decisions, architecture best practices, EKS, GKE, OpenTracing, Ansible, CDK</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5076742007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
</ul>
<ul>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
</ul>
<ul>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
</ul>
<ul>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
</ul>
<ul>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, TypeScript, JavaScript, or C++.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, TypeScript, JavaScript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a Management role that combines deep engineering and AI expertise, leading a team, and working on customer-facing problems. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team that collaborates with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe04c8cc-782</externalid>
      <Title>Forward Deployed Engineering Manager</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>
<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>
<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>
<p>Why Join Us</p>
<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>
<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>
<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>
<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>
<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>
<p>The role</p>
<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>
<p>You’ll manage a team responsible for building sandboxed, reproducible environments (terminal-based workflows, browser automation, and computer-use simulations) that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>
<p>What You’ll Do</p>
<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>
<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>
<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>
<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>
<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>
<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>
<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>
<p>Stay close to the technical details: reviewing architecture, unblocking complex issues, and guiding design decisions</p>
<p>What We’re Looking For</p>
<p>5+ years of software engineering experience (Python)</p>
<p>2+ years of experience managing or leading engineers in fast-paced environments</p>
<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>
<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>
<p>Background in infrastructure, developer tooling, or distributed systems</p>
<p>Strong debugging skills and systems thinking across layered, containerized environments</p>
<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>
<p>Excellent communication and stakeholder management skills</p>
<p>Preferred</p>
<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>
<p>Familiarity with cloud infrastructure (GCP or AWS)</p>
<p>Prior experience in AI/ML platforms, data companies, or research environments</p>
<p>Contributions to open-source projects in RL, agents, or developer tooling</p>
<p>Why This Role Matters</p>
<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>
<p>In this role, you’ll lead the team building the environments that define how models learn, working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have an outsized impact on the future of AI.</p>
<p>About Alignerr</p>
<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>
<p>Life at Labelbox</p>
<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>
<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>
<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>
<p>Growth: Career advancement opportunities directly tied to your impact</p>
<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>
<p>Our Vision</p>
<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>
<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>
<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$220,000 USD</Salaryrange>
      <Skills>Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a data-centric AI development company that provides critical infrastructure for breakthrough AI models.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5101195007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e948a283-667</externalid>
      <Title>Staff Software Engineer, Platform Security</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Platform Security Engineering team. As a key member of this team, you will be responsible for advancing our mission through security expertise, software development, and operational excellence.</p>
<p>In this technical leadership role, you will articulate and pursue the most leveraged opportunities to reduce security risk across Engineering, designing and building lovable &#39;paved paths&#39; for managing identities and access, shipping code, configuring cloud infrastructure, and operating services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and applying best-in-class secure baselines for cloud infrastructure</li>
<li>Securing first- and third-party software supply chains, from the dev environment through CI/CD and into production</li>
<li>Building and owning identity and access management (IAM) systems that are user-friendly and promote least privilege</li>
<li>Managing infrastructure vulnerabilities while supporting rapid growth for Engineering</li>
<li>Consulting on risk assessments, architectural designs, threat models, code reviews, and more, pragmatically balancing security with other business considerations</li>
</ul>
<p>Example projects include:</p>
<ul>
<li>Supporting IAM with scalable platform solutions</li>
<li>Building tooling to prevent and address vulnerabilities across our infrastructure</li>
<li>Integrating service-to-service authentication and authorization into Discord&#39;s internal developer platform</li>
</ul>
<p>What we look for in a candidate includes:</p>
<ul>
<li>5+ years of experience building and operating production systems or infrastructure</li>
<li>5+ years of experience writing software in a general-purpose programming language</li>
<li>4+ years of experience securing systems with millions of users</li>
<li>Experience mentoring junior ICs and leading technical projects involving multiple engineers and spanning multiple quarters</li>
<li>Experience designing and building software for customers (internal or external) beyond your immediate team</li>
<li>Experience securing cloud environments</li>
<li>Experience defining and orchestrating containers</li>
<li>Familiarity with build and CI/CD technologies</li>
<li>Understanding of modern authentication and authorization concepts</li>
</ul>
<p>Bonus points if you have experience developing and debugging distributed systems atop GCP and Cloudflare, leading complex migrations or risk management programs across an engineering organization, or managing and securing VMs or bare-metal hosts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$248,000 to $279,000 + equity + benefits</Salaryrange>
      <Skills>cloud infrastructure, identity and access management, software development, operational excellence, security expertise, container orchestration, build and CI/CD technologies, modern authentication and authorization concepts, distributed systems, GCP and Cloudflare</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8177912002</Applyto>
      <Location>San Francisco Bay Area or Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>24176cb8-311</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>
<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>
<p>Responsibilities include:</p>
<ul>
<li>Building and managing massive-scale clusters</li>
<li>Designing, developing, and extending an in-house container orchestration platform</li>
<li>Collaborating with research teams to architect and optimize compute clusters</li>
<li>Profiling, debugging, and resolving complex system-level performance bottlenecks</li>
<li>Owning end-to-end infrastructure initiatives</li>
</ul>
<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and a proven track record of profiling, debugging, and optimizing complex system-level performance issues.</p>
<p>Preferred skills and experience include:</p>
<ul>
<li>Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads</li>
<li>Operating or designing large-scale AI training/inference clusters</li>
<li>Familiarity with performance tools, tracing, and debugging in production distributed environments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d4292d1-227</externalid>
      <Title>Software Engineer, Sandboxing (Systems)</Title>
      <Description><![CDATA[<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>
<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>
<p>Responsibilities:</p>
<p>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</p>
<p>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</p>
<p>Investigate and resolve performance bottlenecks in virtualized environments</p>
<p>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</p>
<p>Develop tooling for monitoring and improving virtualization performance</p>
<p>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</p>
<p>Contribute to the design and implementation of our next-generation compute infrastructure</p>
<p>Share knowledge with team members on low-level systems programming and Linux kernel internals</p>
<p>Partner with cloud providers to influence hardware and platform features for AI workloads</p>
<p>You may be a good fit if you:</p>
<p>Have experience with Linux kernel development, system programming, or related low-level software engineering</p>
<p>Understand virtualization technologies (KVM, Xen, QEMU, etc.) and their performance characteristics</p>
<p>Have experience optimizing system performance for compute-intensive workloads</p>
<p>Are familiar with modern CPU architectures and memory systems</p>
<p>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</p>
<p>Understand Linux resource management, scheduling, and memory management</p>
<p>Have experience profiling and debugging system-level performance issues</p>
<p>Are comfortable diving into unfamiliar codebases and technical domains</p>
<p>Are results-oriented, with a bias towards practical solutions and measurable impact</p>
<p>Care about the societal impacts of AI and are passionate about building safe, reliable systems</p>
<p>Strong candidates may also have experience with:</p>
<p>GPU virtualization and acceleration technologies</p>
<p>Cloud infrastructure at scale (AWS, GCP)</p>
<p>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</p>
<p>eBPF programming and kernel tracing tools</p>
<p>OS-level security hardening and isolation techniques</p>
<p>Developing custom scheduling algorithms for specialized workloads</p>
<p>Performance optimization for ML/AI specific workloads</p>
<p>Network stack optimization and high-performance networking</p>
<p>Experience with TPUs, custom ASICs, or other ML accelerators</p>
<p>Representative projects:</p>
<p>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</p>
<p>Implementing custom memory management schemes for large-scale distributed training</p>
<p>Developing specialized I/O schedulers to prioritize ML workloads</p>
<p>Creating lightweight virtualization solutions tailored for AI inference</p>
<p>Building monitoring and instrumentation tools to identify system-level bottlenecks</p>
<p>Enhancing communication between VMs for distributed training workloads</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5025591008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6b8d0e9-04e</externalid>
      <Title>Salesforce Manager, CRM Systems</Title>
      <Description><![CDATA[<p>As a Salesforce Engineering Manager at GitLab, you will lead the architectural vision and technical roadmap for our Salesforce platform and integrated go-to-market applications. You&#39;ll manage and mentor a team of Salesforce engineers while partnering closely with stakeholders across Sales, Marketing, Customer Experience, and Operations to translate business needs into a prioritized, high-impact engineering backlog.</p>
<p>A key part of this role is balancing long-term platform health with near-term business needs, while driving operational excellence through strong sprint management, clear delivery expectations, and continuous improvement. You&#39;ll also champion the integration of AI-native solutions across our operations and go-to-market systems and within team workflows, helping GitLab scale efficiently.</p>
<p>This role includes leading large, complex programs that drive business transformation, ensuring our platform remains scalable, secure, and compliant as we grow. Some examples of our projects:</p>
<ul>
<li>Building and evolving a scalable Salesforce architecture across integrated go-to-market applications</li>
<li>Advancing Salesforce DevOps practices (source control, continuous integration, and release management) and platform governance</li>
<li>Designing and delivering advanced Salesforce solutions and integrations with other critical business systems</li>
<li>Introducing AI-native capabilities and automation to improve system workflows and team productivity</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of Salesforce engineers, supporting career growth through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision and technical roadmap for GitLab&#39;s Salesforce platform and integrated go-to-market applications, with a focus on scalability, performance, security, and compliance.</li>
<li>Champion the integration of AI-native solutions within operations and go-to-market systems and within engineering workflows to improve efficiency and unlock new capabilities.</li>
<li>Partner with cross-functional stakeholders (Sales, Marketing, Customer Experience, and Operations) to translate business needs into a prioritized engineering backlog and delivery plan.</li>
<li>Provide technical leadership on complex challenges by contributing to solution design, reviewing code, and guiding implementation across the Salesforce ecosystem.</li>
<li>Own operational excellence for the team, including sprint planning, capacity management, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices, including source control, continuous integration and continuous deployment, release management, code quality, and platform governance.</li>
<li>Lead large-scale programs and integrations across Salesforce and other key business systems, introducing automation and process improvements to help GitLab scale.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of progressive experience in Salesforce development and architecture, building scalable solutions that support go-to-market systems.</li>
<li>2+ years of experience managing or leading technical teams, with a track record of coaching, giving actionable feedback, and growing team members.</li>
<li>Strong proficiency with Salesforce technologies including Apex, Lightning Web Components, Visualforce, and SOQL, and the ability to guide design and code review decisions.</li>
<li>Strong command of Salesforce DevOps practices, including Git-based source control, continuous integration and continuous delivery (CI/CD), and reliable release management.</li>
<li>Experience designing and overseeing integrations between Salesforce and other business systems, including using integration platform as a service (iPaaS) tools and automation solutions.</li>
<li>Ability to translate stakeholder needs into a prioritized engineering backlog, balancing long-term platform health with near-term business outcomes.</li>
<li>Excellent communication and relationship-building skills, with the ability to explain technical concepts clearly to non-technical partners across Sales, Marketing, Customer Experience, and Operations.</li>
<li>Comfort working in a remote, asynchronous environment, with a passion for using AI-native solutions to improve team productivity and the systems you build.</li>
</ul>
<p>About the team: The Salesforce Engineering Manager is part of the Enterprise Applications team, which is responsible for GitLab&#39;s critical business applications, including Salesforce, ServiceNow, Zuora, NetSuite, and more. This team helps GitLab scale by delivering new capabilities while maintaining a reliable, secure, and compliant production environment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, Apex, Lightning Web Components, Visualforce, SOQL, Git-based source control, Continuous integration and continuous delivery (CI/CD), Release management, Integration platform as a service (iPaaS) tools, Automation solutions, AI-native solutions, DevOps practices, Cloud computing, Containerization, Microservices architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a company that provides an intelligent orchestration platform for DevSecOps. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8184975002</Applyto>
      <Location>Remote, Bangalore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7520a7f6-8b6</externalid>
      <Title>Member of Technical Staff - Infrastructure Reliability</Title>
      <Description><![CDATA[<p>We are seeking a Member of Technical Staff - Infrastructure Reliability to join our team. As a key member of our infrastructure team, you will own the availability, performance, and evolution of our core compute, storage, and networking infrastructure. This is a joint xAI/X role: you will own 24×7 reliability for the world&#39;s largest GPU training superclusters and one of the highest-QPS production systems on the planet.</p>
<p>You will:</p>
<ul>
<li>Define and execute the technical strategy for infrastructure reliability and scalability</li>
<li>Build and maintain the automation, observability, and control planes that keep multi-datacenter, hybrid cloud/on-prem environments healthy</li>
<li>Lead incident response, deep-dive root cause analysis, and post-mortems that drive real fixes</li>
<li>Identify, instrument, and eliminate systemic failure patterns</li>
<li>Design and implement high-leverage systems software in Python and Rust</li>
<li>Push the state of the art in large-scale GPU cluster operations and AI workload reliability</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>5+ years shipping production software and/or operating distributed infrastructure at scale</li>
<li>Expert-level knowledge of Linux systems, TCP/IP networking, and systems programming</li>
<li>Strong coding skills with proven production experience in Rust (strongly preferred) and at least one of Python, Go, or C++</li>
<li>Deep experience with large-scale distributed systems in on-prem and cloud environments</li>
<li>Hands-on expertise with container orchestration, container runtimes, and infrastructure-as-code</li>
<li>An intimate understanding of common failure modes in distributed systems and how to mitigate them</li>
<li>A track record of participating in (or building) effective on-call rotations in high-stakes environments</li>
</ul>
<p>In addition to a competitive base salary, you will receive equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $400,000 USD</Salaryrange>
      <Skills>Linux systems, TCP/IP networking, systems programming, Rust, Python, Go, C++, container orchestration, container runtimes, infrastructure-as-code, high-performance networking, low level configuration, deployment, support, monitoring, administration, troubleshooting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4801451007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3917fb4f-2ab</externalid>
      <Title>Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented full stack software engineer to join our growing team at Anduril Labs in Washington, DC.</p>
<p>As a full stack software engineer in Anduril Labs, you will help bring innovative, next-generation concepts to life through proof-of-concept development and rapid prototyping using bleeding edge technologies.</p>
<p>The ideal candidate has exceptional software development and creative problem-solving skills, is a self-starter, and can quickly grasp complex concepts.</p>
<p>As a full stack software engineer, you possess the skills to architect, develop, and deploy distributed applications and services, including both front-end and back-end components.</p>
<p>You have experience with agile, end-to-end software development lifecycle and are comfortable developing and deploying code across Windows and Linux-based systems (including standalone bare-metal hardware, virtualized environments, and cloud-hosted platforms).</p>
<p>Embedded software development experience is a plus.</p>
<p>You are also proficient in integrating legacy code and systems, leveraging open-source technologies, and developing and utilizing APIs.</p>
<p>Additionally, you have a solid understanding of AI/ML core concepts (e.g., feature extraction, supervised vs. unsupervised learning, regression, classification, clustering, deep learning neural networks, NLP, LLMs, SLMs, model fine-tuning, prompt engineering, RAG) and hands-on experience developing (Gen)AI-enhanced applications or services.</p>
<p>We also expect candidates to have familiarity with database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB) and experience with data modeling, data wrangling, analytics, and visualization.</p>
<p>Since Anduril Labs supports all Anduril businesses and product lines, you will have the unique opportunity to work closely with multi-disciplinary engineering and product development teams across the entire company.</p>
<p>This means you will get to directly contribute to the development of Anduril’s next-generation products and services.</p>
<p>So if you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the development of prototypes to demonstrate advanced concepts in areas like autonomous and multi-agent systems, GenAI, advanced data analytics, quantum computing/sensing/networking/comms/machine learning, modeling, simulation, optimization, visualization, next-gen human-machine interfaces, heterogeneous computing, and cybersecurity.</li>
<li>Own the entire Software Development Lifecycle from inception through development, testing, deployment, and documentation for Anduril Labs-developed software prototypes.</li>
<li>Interface and collaborate with other Anduril and customer engineering teams, and strategic partners.</li>
<li>Support Anduril- and customer-funded R&amp;D efforts.</li>
<li>Participate in field experiments and technology demonstrations.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of programming with Python, C++, Java, Rust, Go, or JavaScript/TypeScript.</li>
<li>Proven software architecture and design skills.</li>
<li>Ability to quickly understand and navigate complex systems and established codebases.</li>
<li>AI/ML development using commercial and open-source AI frameworks, models, and tools (e.g., Jupyter Notebook, PyTorch, TensorFlow, Scikit-learn, OpenAI, Claude, Gemini, Llama, LangChain, YOLO, AWS SageMaker, Bedrock, Azure AI, RAG).</li>
<li>Web app development (e.g., React, Angular, or Vue).</li>
<li>Cloud development (e.g., AWS, Azure, or GCP).</li>
<li>Data modeling and wrangling.</li>
<li>Networking basics (e.g., DNS, TCP/IP vs. UDP, socket communications, LDAP, Active Directory).</li>
<li>Database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB).</li>
<li>API development and integration (e.g., REST, GraphQL).</li>
<li>Containerization technologies (e.g., Docker, Kubernetes).</li>
<li>Software development on Linux and Windows.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to effectively articulate complex technical concepts to diverse audiences.</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret/SCI security clearance.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>BS in Computer Science, Engineering, or a similar field.</li>
<li>Distributed applications development (e.g., client/server, microservices, multi-agent solutions).</li>
<li>High-performance computing (HPC) and big data technologies (e.g., Apache Spark, Hadoop).</li>
<li>Mobile app development (e.g., iOS or Android).</li>
<li>Embedded software development experience.</li>
<li>Willingness to travel up to approximately 10% within the US.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, C++, Java, Rust, Go, JavaScript/TypeScript, Software Architecture, AI/ML, Web App Development, Cloud Development, Data Modeling, Networking, Database Technologies, API Development, Containerization, Git Version Control, Build Tools, CI/CD Pipelines, Unit Testing, Integration Testing, End-to-End Testing, Distributed Applications Development, High Performance Computing, Big Data Technologies, Mobile App Development, Embedded Software Development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089044007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f3a04da-d45</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are looking for software engineers to join our Platform organisation. We build the foundational primitives that accelerate product development across Anthropic, and own infrastructure and systems that teams depend on to ship reliably and at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and optimise the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines.</li>
<li>Partner closely with product teams to understand their development workflow and eliminate friction points.</li>
<li>Work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don&#39;t.</li>
</ul>
<p><strong>Platforms</strong></p>
<ul>
<li>Platform Acceleration: We work on maximising the developer productivity of product engineers at Anthropic.</li>
<li>Service Infra: We build and maintain the core infrastructure that powers Anthropic&#39;s engineering organisation, from service mesh and observability systems to deployment pipelines and shared libraries.</li>
<li>Multicloud: We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers.</li>
<li>Auth &amp; Identity: We build and maintain the critical infrastructure that powers identity and authentication across Anthropic&#39;s product suite.</li>
<li>Connectivity: Our mission is to make Claude the most connected AI.</li>
<li>API Distributability: The Claude API today is a rapidly growing platform serving developers and enterprises at scale.</li>
<li>Platform Intelligence: We build the training systems that adapt Claude to specific customer workloads.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have a minimum of 5 years of practical experience building backend product or platform systems, distributed systems, cloud-native products, developer tools, or external developer-facing products.</li>
<li>Have strong fundamentals in service-oriented architectures, networking, and systems design.</li>
<li>Are proficient in Python, Go, Rust, or similar systems languages.</li>
<li>Have experience with cloud infrastructure (GCP, AWS, or Azure), container orchestration (Kubernetes), and/or multi-cloud networking.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual compensation range: $320,000 - $320,000 USD.</li>
<li>Visa sponsorship available.</li>
<li>Flexible work arrangements, including remote work options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $320,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, Cloud infrastructure, Container orchestration, Multi-cloud networking, Service-oriented architectures, Networking, Systems design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157844008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae849446-fe5</externalid>
      <Title>Site Reliability Engineer - Cybersecurity</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Cybersecurity / SRE team at xAI is focused on ensuring the security and reliability of X Money. This role will primarily focus on the X Money platform but will also cross over with the X Social platform.</p>
<p>You&#39;ll be responsible for securing and maintaining the reliability of X Money&#39;s infrastructure. You&#39;ll work closely with cross-functional teams to enhance security measures, improve system resilience, and implement best practices.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and secure mission-critical applications in a hybrid cloud environment.</li>
<li>Manage identities and roles effectively.</li>
<li>Monitor and remediate infrastructure to comply with regulations and best practices (e.g., PCI, NIST CSF).</li>
<li>Maintain a SIEM and all data pipelines needed for reliable alerting.</li>
<li>Design and implement secure container standards and automation to enable frictionless developer workflows.</li>
<li>Maintain Kubernetes security aligned with current best practices.</li>
<li>Build, deploy, and maintain security operations infrastructure using Python, Terraform, and Puppet.</li>
<li>Secure and enhance CI/CD pipelines.</li>
<li>Integrate and maintain code scanning platforms.</li>
<li>Develop dashboards and alerts from security metrics.</li>
<li>Own security projects: identify issues and implement solutions.</li>
<li>Apply critical analysis and problem-solving skills.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven experience securing hybrid AWS/on-premises environments, including IAM and overall security posture.</li>
<li>Strong proficiency in Python, Terraform, and Puppet.</li>
<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>
<li>Deep expertise in Kubernetes and container security.</li>
<li>Hands-on expertise building GitHub Actions and workflows.</li>
<li>Extensive experience with Prometheus, Grafana, CloudWatch, and Karma.</li>
<li>Well-versed in managing and integrating Wazuh.</li>
<li>Hands-on experience with security scanning tools (Semgrep, Trivy, Falco).</li>
<li>Proactive mindset with strong ownership and problem-solving skills.</li>
<li>Excellent critical thinking and analytical abilities.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Terraform, Puppet, Kubernetes, container security, GitHub Actions, Prometheus, Grafana, CloudWatch, Karma, Wazuh, security scanning tools, critical analysis, problem-solving skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803447007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>491db8e9-776</externalid>
      <Title>Staff Site Reliability Engineer- Splunk Expert</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Staff Site Reliability Engineer with deep expertise in Splunk and Grafana to own and evolve our observability ecosystem.</p>
<p>As a Staff Site Reliability Engineer, you will move beyond simple monitoring to architect a comprehensive, scalable telemetry platform. You will be our subject-matter expert in Splunk optimisation, ensuring our logging architecture is performant, cost-effective, and deeply integrated with our automated workflows.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Splunk Architecture &amp; Optimisation: Lead the design and tuning of Splunk environments. Optimise indexer performance, search efficiency, and data models to ensure rapid troubleshooting and cost-efficiency.</li>
<li>Advanced Visualisation: Architect and maintain sophisticated Grafana dashboards that correlate disparate data sources into a single pane of glass for real-time system health.</li>
<li>Automated Infrastructure: Design, build, and maintain scalable observability infrastructure using tools like Terraform.</li>
<li>Pipeline Engineering: Optimise the collection, processing, and storage of telemetry data (metrics, logs, traces) to ensure high reliability and low latency.</li>
<li>Workflow Automation: Develop custom Splunk workflows and integrations that trigger automated responses to system events, reducing Mean Time to Resolution (MTTR).</li>
<li>Incident Response: Participate in on-call rotations and lead post-incident reviews to drive systemic improvements through &#39;observability-driven development&#39;.</li>
</ul>
<p>Required skills and experience include:</p>
<ul>
<li>Splunk Mastery: Deep, hands-on experience with Splunk administration, search optimisation (SPL), and architecting complex data pipelines.</li>
<li>Grafana Expertise: Proven ability to build actionable, intuitive dashboards in Grafana that go beyond simple charts to provide deep operational insights.</li>
<li>SRE Mindset: 8+ years of experience in an SRE, DevOps, or Systems Engineering role with a focus on high-availability systems.</li>
<li>Programming Proficiency: Strong coding skills in Go, Python, or Ruby for building internal tools and automating observability workflows.</li>
<li>Telemetry Standards: Hands-on experience with OpenTelemetry (OTel), Prometheus, or similar frameworks for instrumenting applications.</li>
<li>Distributed Systems: Deep understanding of Linux internals, networking (TCP/IP, DNS, load balancing), and container orchestration (Kubernetes/EKS).</li>
</ul>
<p>Bonus skills include:</p>
<ul>
<li>Tracing: Implementation of distributed tracing (Jaeger, Tempo, or Honeycomb) to visualise request flow across microservices.</li>
<li>Security Observability: Experience using Splunk for security orchestration (SOAR) or SIEM-related workflows.</li>
<li>Cloud Platforms: Experience managing native observability tools within AWS, Azure, or GCP.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Splunk, Grafana, SRE, Go, Python, Ruby, OpenTelemetry, Prometheus, Linux, Networking, Container Orchestration, Tracing, Security Observability, Cloud Platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a publicly traded software company that specialises in identity and access management.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6874616</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployment/GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimize and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools such as Spark, Flume, and Dask</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401(k) matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>Experience in deploying and managing AI models in production, Experience with the tools of large volume data pipelines like spark, flume, dask, etc., Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0282a9-9ee</externalid>
      <Title>Staff Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>
<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>
<li>Develop and refine monitoring and alerting to enhance system reliability.</li>
<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>
<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>
<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>
<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos and/or Grafana.</li>
<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>
<li>Proven track record of leading incident management and post-mortem analysis.</li>
<li>Excellent problem-solving, analytical, and communication skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience running and scaling observability tools as a cloud provider.</li>
<li>Experience administering large-scale kubernetes clusters.</li>
<li>Deep understanding of data-streaming systems.</li>
</ul>
<p>The base salary range for this role is $188,000 to $250,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale kubernetes clusters, Deep understanding of data-streaming systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider for AI, founded in 2017 and listed on Nasdaq since March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577361006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a442cd76-850</externalid>
      <Title>Virtual Solutions Engineer, Lisbon</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today, the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We&#39;re not looking for people who wait for a polished roadmap; we&#39;re looking for the builders who see the cracks in the Internet that everyone else has simply learned to live with. We value candidates who have the instinct to spot a &#39;normalized&#39; problem and the AI-native curiosity to create a solution using the latest tools.</p>
<p>Our culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.</p>
<p>If you&#39;re the type of person who values curiosity over bureaucracy and sees AI as a partner in solving tough problems to keep the Internet moving forward, you&#39;ll fit right in.</p>
<p>Note: This role is based in Lisbon.</p>
<p>About the team</p>
<p>The Pre-Sales Solutions Engineering organization is responsible for the technical sale of the Cloudflare solution portfolio, ensuring maximum business value, fit-for-purpose solution design and an adoption roadmap for our customers. As a Solutions Engineer, you are the technical customer advocate within Cloudflare. To aid your customers, you will work closely with every team at Cloudflare, from Sales and Product to Engineering and Customer Success.</p>
<p>Your goal should drive you through the entire organization as you seek out and create solutions for your customer&#39;s needs.</p>
<p>Virtual Solution Engineers (VSE) are a specialized part of this team, engaging directly with Small and Medium Business (SMB) customers across the Europe, Middle East and Africa (EMEA) region. They deliver product demonstrations, conduct discovery sessions, build technical alignment, and ensure customers understand how Cloudflare can solve their challenges at scale.</p>
<p>VSEs work primarily through digital channels, collaborating closely with Account Executives (AE) across multiple markets to engage with new prospects and help existing customers move forward in their journey with Cloudflare.</p>
<p>Ultimately, we&#39;re committed to accelerating sales cycles and increasing win rates while improving productivity and efficiency through standardization and automation.</p>
<p>Who are we looking for?</p>
<p>Our Virtual Solution Engineers come from a wide range of backgrounds. We&#39;re serious about building a diverse team. When hiring we look for experience combined with genuine curiosity for our technology and ambition to be as diligent and helpful as possible in supporting our customers and partners to achieve their goals.</p>
<p>The range of products and solutions offered by Cloudflare is broad so that we are able to meet our lofty goal of helping to build a better Internet. A broad knowledge of Internet performance, networking and security technology is required.</p>
<p>The curiosity to maintain and develop new knowledge is essential to keeping up with the high rate of product innovation at Cloudflare.</p>
<p>Ultimately, you are passionate about technology and have the ability to explain complex technical concepts in easy-to-understand terms. You are naturally curious, and an avid builder who is not afraid to be hands on.</p>
<p>Role Responsibilities</p>
<p>Connecting with multiple stakeholders within Cloudflare and utilizing a variety of tools, your role will be to support colleagues, customers and partners throughout the sales process by:</p>
<ul>
<li>Performing research and analysis on current and prospective customers&#39; business and product usage;</li>
<li>Leading technical discovery to understand customer requirements and challenges;</li>
<li>Building and delivering product demonstrations to prospective customers;</li>
<li>Owning technical validation activities such as Proofs of Concept, Requests for Proposals, and Solution Design;</li>
<li>Translating complex technical capabilities into clear outcome-driven solutions;</li>
<li>Staying on top of Cloudflare&#39;s new products, Internet technologies, and the competitive landscape.</li>
</ul>
<p>What Makes This Role Exciting</p>
<ul>
<li>Regional Impact: You&#39;ll work with a diverse range of customers across EMEA, adapting to different markets, industries, and digital maturity levels;</li>
<li>Breadth of technology: You&#39;ll cover Cloudflare&#39;s full platform: Application Services, Networking, and Developer Platform;</li>
<li>Ownership: Build technical confidence throughout the sales cycle;</li>
<li>Collaboration: You&#39;ll work hand-in-hand with teams such as Sales, Solutions Engineering, Marketing, Product, and Customer Success to help new audiences discover Cloudflare and to guide customers through evaluation and adoption;</li>
<li>Innovation and scale: You&#39;ll contribute to campaigns, digital events, and scalable technical content that expand Cloudflare&#39;s reach and help more organizations benefit from our platform.</li>
</ul>
<p>Examples of desirable skills, knowledge, and experience:</p>
<ul>
<li>Graduates of technical, computer science, engineering or other relevant degrees;</li>
<li>1-5 years of professional experience, ideally in technical presales, solutions engineering, consulting, or related roles;</li>
<li>Strong knowledge of Internet fundamentals (HTTP/S, DNS, TLS, networking, APIs); in other words, a solid understanding of &#39;how the Internet works&#39;;</li>
<li>Programming and application development knowledge; Python, JavaScript, and Bash experience is preferred;</li>
<li>Excellent communication skills and the ability to present complex concepts clearly and confidently in front of an audience;</li>
<li>Comfortable working across multiple markets and time zones in the EMEA region;</li>
<li>Fluency (written &amp; spoken) in English AND Arabic.</li>
</ul>
<p>Bonus!</p>
<ul>
<li>Previous experience in a customer-facing consultative or support role.</li>
<li>Understanding of how customers make buying decisions and how to explain Return On Investment.</li>
<li>Knowledge of security products such as Bot Management and Web Application Firewalls (WAF).</li>
<li>Exposure to emerging technical landscape trends, e.g. cloud security platforms, SASE and Zero Trust.</li>
<li>Hands-on knowledge of cloud providers (e.g. AWS, Azure, GCP) and modern app architectures (serverless, containers, microservices).</li>
<li>Understanding of common application security risks (e.g., CSRF, XSS, SQLi) and mitigation strategies.</li>
<li>Experience with regulatory or compliance frameworks (SOC-2, PCI DSS, HIPAA, GDPR).</li>
<li>Track record of building reusable assets; demo environments, reference architectures, or presales tooling.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This technology, already used by Cloudflare&#39;s enterprise customers, is provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government entities.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Internet fundamentals, Networking, Security technology, Programming and application development knowledge, Python, JavaScript, Bash, APIs, Cloud providers, Modern app architectures, Serverless, Containers, Microservices, Cloud security platforms, SASE, Zero Trust, Graduates of technical, computer science, engineering or other relevant degrees, 1-5 years of professional experience, Strong knowledge of Internet fundamentals, Excellent communication skills, Fluency in English and Arabic, Previous experience in a customer-facing consultative or support role, Understanding of how customers make buying decisions, Knowledge of security products, Exposure to emerging technical landscape trends, Hands-on knowledge of cloud providers, Understanding of common application security risks, Experience with regulatory or compliance frameworks, Track record of building reusable assets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks, powering millions of websites and Internet properties for customers ranging from individual bloggers to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6934200</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7ad63033-e7e</externalid>
      <Title>Senior Security Engineer I, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Senior Security Engineer I to join our Vulnerability Management team. This is an execution-focused role where you will perform hands-on triage, drive remediation follow-through, and improve day-to-day operational quality across cloud and specialized infrastructure environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Performing hands-on vulnerability triage and risk assessment using team-defined standards and playbooks</li>
<li>Tracking remediation progress with owner teams, escalating blockers, and ensuring clean issue closure</li>
<li>Supporting automated triage workflows by validating outputs and improving signal quality</li>
<li>Contributing to automated remediation campaigns (e.g., EOL cleanup, vulnerable software upgrades, and fix verification)</li>
<li>Supporting zero-day and embargo response by helping inventory affected assets and tracking owner-team deployment status</li>
<li>Participating in incident investigations by gathering technical evidence and supporting impact analysis</li>
<li>Participating in on-call rotation for critical vulnerability events</li>
<li>Maintaining high-quality documentation, runbooks, and operational updates</li>
</ul>
<p>The ideal candidate will have 3+ years of relevant experience in vulnerability management, security operations, application security, or related security engineering. Key skills include a strong understanding of vulnerability assessment fundamentals, hands-on experience with vulnerability management platforms, proficiency in scripting/automation for workflow support, and familiarity with cloud security concepts.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid life insurance, and flexible PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>vulnerability management, security operations, application security, vulnerability assessment fundamentals, vulnerability management platforms, scripting/automation for workflow support, cloud security concepts, security automation/SOAR platforms, container/Kubernetes vulnerability workflows, hardware-adjacent vulnerability domains, compliance evidence collection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4654263006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac95264-313</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>
<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>
<p>Production experience working with Kubernetes and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, are required. Experience with GPU-enabled clusters is a bonus.</p>
<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and with IaC tools such as Terraform or CloudFormation, is a plus.</p>
<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, as well as with other cloud providers such as Google Cloud and Azure, is a bonus.</p>
<p>Experience with GitOps tooling such as Flux or Argo, and with CI/CD systems such as GitHub Actions, is a plus.</p>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4802840008</Applyto>
      <Location>Romania (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae715d1b-bea</externalid>
      <Title>Engineering Manager - Notebook Dataplane</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. In this role, you will lead the Notebook Dataplane team, which is responsible for running user code in the Notebook. We are undergoing an exciting architecture transformation to run stateful user code as a service for the product teams, providing a reliable and low-latency service for the Serverless products.</p>
<p>As the Engineering Manager, you will play a critical role in driving the technical vision, architecture, and execution for the service. You will lead a team of software engineers and recruit new team members to realize the vision.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and driving the stateful user code execution service vision.</li>
<li>Partnering with serverless platform teams to build the service.</li>
<li>Owning the roadmap and execution, ensuring all team deliverables are met with high quality and on schedule.</li>
<li>Defining team best practices for engineering excellence, including design reviews, code quality, testing strategies, and performance optimizations.</li>
<li>Collaborating cross-functionally with teams across the stack.</li>
</ul>
<p>We are looking for an experienced Engineering Manager with a strong track record of technical leadership and impact. The ideal candidate will have 10+ years of software engineering experience, 3+ years of engineering management experience, and expertise in distributed systems, cloud platforms, and modern web application architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,900-$253,750 USD</Salaryrange>
      <Skills>distributed systems, cloud platforms, modern web application architectures, software engineering, engineering management, containers, Kubernetes, system-level skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8190108002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d50772ab-afe</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p>We are seeking a Staff / Senior Software Engineer to join our Cloud Inference team. The successful candidate will design and build infrastructure that serves Claude across multiple cloud service providers (CSPs), accounting for differences in compute hardware, networking, APIs, and operational models.</p>
<p>The ideal candidate will have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users. They will also have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimise inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyse observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration</li>
<li>Strong interest in inference</li>
<li>Thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>
<li>Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>Pick up slack, even when it goes outside your job description</li>
</ul>
<p>Preferred skills:</p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimisation, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimisation, batching, caching, and serving strategies</li>
<li>Experience with machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p>Salary Range: $300,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$485,000 USD</Salaryrange>
      <Skills>high-performance, large-scale distributed systems, cloud computing (AWS, GCP, Azure), kubernetes, infrastructure as code, container orchestration, inference, cross-functional collaboration, autonomy and self-driven, platform-agnostic tooling, capacity management, cost optimisation, resource planning, llm inference optimisation, machine learning infrastructure, ci/cd systems, multi-region deployments, geographic routing, global traffic management, python, rust, direct experience working with csp partner teams, building platform-agnostic tooling, hands-on experience with capacity management, strong familiarity with llm inference optimisation, experience with machine learning infrastructure, background designing and building ci/cd systems, solid understanding of multi-region deployments, proficiency in python or rust</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing organisation with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f80914c-588</externalid>
      <Title>Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About Role</p>
<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>
<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>
<ul>
<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>
<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>
<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>
<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>
<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>
<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>
<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>
</ul>
<p><strong>Key Qualifications</strong></p>
<ul>
<li>3+ years of experience working in software development covering distributed systems and databases.</li>
<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>
<li>Hands-on experience with modern observability stacks, including Prometheus and Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>
<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>
<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>
<li>Experience with Infrastructure as Code tools like Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost, using technology already used by Cloudflare’s enterprise customers.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
<p>Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>
<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses.</p>
<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a global network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7267602</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42af3f66-4fc</externalid>
      <Title>AI Infrastructure Architect</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>AI Infrastructure Architect</strong></p>
<p>About the Role</p>
<p>We are looking for a smart and versatile AI Infrastructure Architect to build and evolve the AI infrastructure and platform that powers our identity security solutions. Your work will enable internal teams and product groups to integrate AI capabilities safely, securely, and at scale, empowering Okta’s mission to protect millions of digital identities worldwide. While your primary focus will be to architect scalable, secure, and resilient infrastructure supporting AI-driven tools, frameworks, and identity services, we value someone who isn’t afraid to get hands-on when needed to help solve complex challenges and drive projects forward.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Lead AI enablement initiatives, including proof-of-concepts for emerging AI infrastructure technologies and integration approaches.</li>
<li>Collaborate cross-functionally with engineering, security, data science, and product teams to align AI platform architecture with business and security goals.</li>
<li>Architect scalable, resilient, and secure AI infrastructure that supports AI-powered tools and features across Okta’s Identity Platform.</li>
<li>Lead infrastructure decisions across AWS, GCP, or hybrid environments with a focus on secure identity data handling.</li>
<li>Develop and maintain infrastructure-as-code frameworks (e.g., Terraform, Helm) to ensure consistent, reproducible deployment of AI services.</li>
<li>Champion security and compliance by embedding data privacy and identity protection standards directly into the AI platform and infrastructure design.</li>
<li>Serve as the key advocate and strategist for AI-driven efficiency initiatives across infrastructure platform teams and pre-production systems.</li>
<li>Implement robust MLOps practices, such as model evaluation, rollback strategies, and A/B testing, to guarantee the reliability and governance of AI in production.</li>
<li>Drive continuous innovation by staying current with AI and cloud infrastructure trends and evangelizing best practices internally.</li>
</ul>
<p><strong>Desired Qualifications</strong></p>
<ul>
<li>10+ years in infrastructure or software engineering, with ≥ 2 years building AI/ML systems.</li>
<li>Exceptional systems-level thinking and a track record of architecting and building enterprise-grade infrastructure.</li>
<li>Deep expertise in cloud platforms (AWS, GCP), distributed systems, and container orchestration (Kubernetes).</li>
<li>Very hands-on: expected to create, review, and contribute substantial amounts of quality code.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience in identity, security, fraud, or risk analytics domains.</li>
<li>Experience operationalizing large language models or foundation models in production environments.</li>
<li>Contributions to MLOps or infrastructure open-source projects.</li>
</ul>
<p><strong>What You’ll Gain</strong></p>
<ul>
<li>Opportunity to lead the infrastructure shaping AI systems that protect millions of identity transactions.</li>
<li>Be at the core of building efficient, AI-powered, enterprise-grade solutions that touch internal and external customers alike.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$235,000-$353,000 USD</Salaryrange>
      <Skills>cloud platforms, distributed systems, container orchestration, infrastructure-as-code, MLOps, AI infrastructure, security and compliance, data privacy and identity protection, identity and security, fraud and risk analytics, large language models and foundation models, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a provider of identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7122284</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a438f945-411</externalid>
      <Title>Senior Site Reliability Engineer (Resilience) - Platform Resilience</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Site Reliability Engineer (SRE) to join our Platform Engineering department. As an SRE, you will lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure. You will grow our global Platform infrastructure to meet increasing scaling demands by developing and maintaining software, tooling, and automations.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain software, tooling, and automations to ensure the reliability and scalability of our global infrastructure.</li>
<li>Lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure.</li>
<li>Collaborate with engineers to identify, implement, and deliver solutions that meet the needs of our customers.</li>
<li>Champion an environment focused on collaboration, operational excellence, and uplifting others.</li>
<li>Respond to major incidents and prevent repeated customer impact through prioritized problem management.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Successes and lessons learned from striving for &#39;progress, not perfection&#39; in the name of Platform reliability.</li>
<li>A background in software engineering, enabling you to collaborate with engineers to expertly identify, implement, and deliver solutions.</li>
<li>Experience in public cloud and managed Kubernetes services is advantageous.</li>
<li>A passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Operated a SaaS product in a public cloud, ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>
<li>Built or operated Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, along with the vital automation to support it.</li>
<li>Written non-trivial programs in Golang or other programming languages.</li>
<li>Worked with containerized services (such as Docker).</li>
<li>Proven experience leading and improving alerting and major-incident-management processes, using metrics systems (e.g., Elastic Stack, Graphite, Prometheus, Influx) to diagnose issues and quantify impacts for audiences at varying levels of the organization.</li>
<li>Experienced in system administration, with professional skills in Linux on distributed systems at scale.</li>
<li>Diagnosed issues with, or designed and implemented solutions using, the Elastic Stack.</li>
<li>Thrived in a self-organizing, knowledge-sharing, globally distributed team environment.</li>
<li>Brought out the best in team members through coaching and mentoring.</li>
</ul>
<p>Compensation:</p>
<ul>
<li>This role is eligible to participate in Elastic&#39;s stock program.</li>
<li>Total rewards package includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</li>
<li>Typical starting salary range for this role is $154,800-$195,600 USD.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,800-$195,600 USD</Salaryrange>
      <Skills>Software engineering, Public cloud, Managed Kubernetes services, Infrastructure-as-Code tooling, Containerized services, System administration, Linux on distributed systems, Golang, Crossplane, Terraform, Docker, Elastic Stack, Graphite, Prometheus, Influx</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic develops a search and analytics platform used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7794016</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>74be15a1-bce</externalid>
      <Title>Software Engineer, Inference Deployment</Title>
      <Description><![CDATA[<p>Our mandate is to make inference deployment boring and unattended. We serve Claude to millions of users across GPUs, TPUs, and Trainium, and every model update must reach production safely, quickly, and without disrupting service. As a Software Engineer on the Launch Engineering team, you&#39;ll design and build the deployment infrastructure that moves inference code from merge to production.</p>
<p>This is a resource-constrained optimization problem at its core: validation and deployment consume the same accelerator chips that serve customer traffic, so your deploys compete with live user requests for the same hardware. Every model brings different fleet sizes, startup times, and correctness requirements, so the system must adapt continuously. You&#39;ll build systems that navigate these constraints, orchestrating validation, scheduling deployments intelligently, and driving down cycle time from merge to production.</p>
<p>Responsibilities:</p>
<ul>
<li>Own deployment orchestration that continuously moves validated inference builds into production across GPU, TPU, and Trainium fleets, unattended under normal conditions</li>
<li>Improve capacity-aware deployment scheduling to maximize deployment throughput against constrained accelerator budgets and variable fleet sizes</li>
<li>Extend deployment observability: dashboards and tooling that answer &quot;what code is running in production,&quot; &quot;where is my commit,&quot; and &quot;what validation passed for this deploy&quot;</li>
<li>Drive down cycle time from code merge to production with pipeline architectures that minimize serial dependencies and maximize parallelism</li>
<li>Optimize fleet rollout strategies for large-scale deployments across thousands of GPU, TPU, and Trainium chips, minimizing disruption to serving capacity</li>
<li>Evolve self-service model onboarding so that new models can be added to the continuous deployment pipeline without Launch Engineering involvement</li>
<li>Partner across the Inference organization with teams owning validation, autoscaling, and model routing to integrate deployment automation with their systems</li>
</ul>
<p>You May Be a Good Fit If You Have:</p>
<ul>
<li>5+ years of experience building deployment, release, or delivery infrastructure at scale</li>
<li>Strong software engineering skills with experience designing systems that manage complex state machines and multi-stage pipelines</li>
<li>Experience with deployment systems where resource constraints shape the design, whether that&#39;s fleet capacity, network bandwidth, hardware availability, or coordinated rollout windows</li>
<li>A track record of building automation that measurably improves deployment velocity and reliability</li>
<li>Proficiency with Kubernetes-based deployments, rolling update mechanics, and container orchestration</li>
<li>Comfort working across the stack, from backend services and databases to CLI tools and web UIs</li>
<li>Strong communication skills and the ability to work closely with oncall engineers, model teams, and infrastructure partners</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience with ML inference or training infrastructure deployment, particularly across multiple accelerator types (GPU, TPU, Trainium)</li>
<li>Background in capacity planning or resource-constrained scheduling (e.g., bin-packing, fleet management, job scheduling with hardware affinity)</li>
<li>Experience with progressive delivery in systems with long validation cycles: canary/soak testing, blue-green deployments, traffic shifting, automated rollback</li>
<li>Experience at companies with large-scale release engineering challenges (mobile release trains, monorepo deployments, multi-datacenter rollouts)</li>
<li>Experience with Python and/or Rust in production systems</li>
</ul>
<p>The annual compensation range for this role is $320,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>deployment infrastructure, software engineering, complex state machines, multi-stage pipelines, Kubernetes-based deployments, container orchestration, backend services, databases, CLI tools, web UIs, ML inference, training infrastructure deployment, capacity planning, resource-constrained scheduling, deployments, progressive delivery, Python, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111745008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f5d87e3c-d74</externalid>
      <Title>Offensive Security Engineer</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer at CoreWeave, you will lead efforts to identify and mitigate security risks across internal and external systems.</p>
<p>You&#39;ll perform penetration testing, conduct threat modeling, and provide guidance to engineering teams on secure design and best practices. This role also involves developing security tooling, researching emerging threats, and contributing to the continuous improvement of CoreWeave&#39;s overall security posture.</p>
<p>Some of what you&#39;ll work on:</p>
<ul>
<li>Perform penetration testing as well as purple and red team exercises.</li>
<li>Conduct threat modeling, code reviews, and design reviews for development teams.</li>
<li>Research new attack techniques and develop strategies to counter them.</li>
<li>Develop and enforce security best practices and standards, maintaining internal compliance.</li>
<li>Provide solutions to complex security issues, manage multiple tasks, and prioritize effectively in a fast-paced environment.</li>
<li>Present technical security information to both technical and non-technical audiences.</li>
<li>Maintain technical documentation, reports, and security tooling with attention to detail.</li>
<li>Participate in other security-related initiatives as assigned.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of experience in offensive information security roles.</li>
<li>Proficiency in at least one programming or scripting language (e.g., Go, Python, C/C++) for automation, code reviews, and tooling.</li>
<li>Hands-on penetration testing experience and familiarity with offensive security tools.</li>
<li>Strong technical knowledge of Linux operating systems and containerized environments.</li>
<li>Experience securing Kubernetes and understanding related security practices.</li>
<li>Able to navigate ambiguity, identify root causes, and solve complex security problems.</li>
<li>Excellent written and verbal communication skills with strong technical documentation abilities.</li>
<li>Capable of working independently while managing multiple priorities in a fast-paced environment.</li>
<li>Strong desire to continuously learn and adopt new technologies and security techniques.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience with firmware reverse engineering, analyzing binaries, bootloaders, and embedded systems for vulnerabilities.</li>
<li>Relevant certifications such as Sec+, Net+, OSCP, or equivalent.</li>
<li>Experience with EDR tuning, detections-as-code, or threat hunting as part of a Blue Team.</li>
<li>Deep understanding of business-wide security best practices and implementation strategies.</li>
</ul>
<p>Wondering if you&#39;re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match.</p>
<p>Here are a few qualities we&#39;ve found compatible with our team.</p>
<p>If some of this describes you, we&#39;d love to talk.</p>
<ul>
<li>You love hunting vulnerabilities and proactively improving security.</li>
<li>You&#39;re curious about evolving attack vectors and defense strategies.</li>
<li>You&#39;re an expert in offensive security techniques and tooling, with a passion for safeguarding systems.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast!</p>
<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>
<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000.</p>
<p>The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p>We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we&#39;ve posted represents the typical compensation range for this role.</p>
<p>To determine actual compensation, we review the market rate for each candidate which can include a variety of factors.</p>
<p>These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance</li>
<li>100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the information under an appropriate export license, or (C) otherwise exempt from the regulations.</p>
<p>Applicant must also comply with all applicable laws and regulations related to the handling and transfer of export-controlled information.</p>
<p>By applying for this position, applicant acknowledges that they have read, understand, and will comply with these requirements.</p>
<p>Failure to comply with these requirements may result in termination of employment, revocation of any security clearances, or other disciplinary action.</p>
<p>Applicant must also agree to undergo a background investigation and obtain any necessary security clearances prior to commencing employment.</p>
<p>Please note that this position is subject to U.S. Government export regulations and may require applicant to sign a non-disclosure agreement (NDA) prior to commencing employment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>programming or scripting language, penetration testing, threat modeling, code reviews, design reviews, security best practices, Linux operating systems, containerized environments, Kubernetes, security practices, firmware reverse engineering, analyzing binaries, bootloaders, embedded systems, EDR tuning, detections-as-code, threat hunting, business-wide security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4657803006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:</li>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
<li>Ability to navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA / Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a04ec5-ae9</externalid>
      <Title>Machine Learning Engineer II</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer II to join our Growth Platform engineering group. In this role, you will:</p>
<ul>
<li>Develop and implement ML models to improve user targeting and personalization for growth initiatives</li>
<li>Design and build scalable ML pipelines for data processing, model training, and deployment</li>
<li>Collaborate with cross-functional teams to identify potential ML solutions for growth opportunities</li>
<li>Conduct A/B tests to evaluate the performance of ML models and optimize their impact on key growth metrics</li>
<li>Analyze large datasets to extract insights and inform decision-making for user acquisition and retention strategies</li>
<li>Contribute to the development of our ML infrastructure, ensuring it can support rapid experimentation and deployment</li>
<li>Stay up-to-date with the latest advancements in ML and recommend new techniques to enhance our growth efforts</li>
<li>Participate in code reviews and collaborate with team members as needed</li>
<li>Thoughtfully leverage AI tools to speed up design, coding, debugging, and documentation, applying your own critical thinking to validate outputs and explain how you used AI in your workflow</li>
<li>Shape our AI-assisted engineering practices by sharing patterns, guardrails, and learnings with the team so we can safely increase our impact without compromising code quality, reliability, or candidate expectations</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>3+ years of experience applying ML to real-world problems, preferably in a growth or user acquisition context</li>
<li>Excellent communication skills and the ability to work effectively in cross-functional teams</li>
<li>Strong problem-solving skills and the ability to translate business requirements into technical solutions</li>
<li>Strong programming skills in Python and experience with PyTorch</li>
<li>Proficiency in data processing and analysis using tools like SQL, Spark, or Hadoop</li>
<li>Experience with recommendation systems, user modeling, or personalization algorithms</li>
<li>Familiarity with statistical analysis</li>
<li>Experience using AI coding assistants and agentic tools as a force-multiplier, and equal comfort solving problems from first principles when those tools aren’t available</li>
<li>A Bachelor’s or Master’s degree in a relevant field, or equivalent experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, SQL, Spark, Hadoop, Recommendation systems, User modeling, Personalization algorithms, Statistical analysis, AI coding assistants, Natural Language Processing, Data visualization, Cloud platforms, Containerization technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7681666</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>766c8fe9-693</externalid>
      <Title>IT Operations Specialist</Title>
      <Description><![CDATA[<p>CoreWeave is seeking an IT Operations Specialist to play a key role in supporting and scaling its internal IT environment. As an IT Operations Specialist, you will blend hands-on end-user and systems support with automation, platform ownership, and process improvement. You will work daily across identity, endpoints, SaaS platforms, and office infrastructure, while contributing to repeatable, scalable solutions that support a growing, distributed workforce.</p>
<p>This role requires strong technical depth, sound operational judgment, and comfort operating in a fast-moving environment. You will work closely with Security, Systems Engineering, People Ops, and Engineering to support the full employee lifecycle while continuously improving reliability, automation, and operational maturity.</p>
<p>Key responsibilities include administering identity and access management platforms, managing macOS and Windows endpoints, administering ITSM platforms, and troubleshooting across SaaS, endpoint, identity, and network layers. You will also create and maintain technical documentation for systems and operational procedures.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance</li>
<li>Company-paid life insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition reimbursement</li>
<li>Ability to participate in the Employee Stock Purchase Program (ESPP)</li>
<li>Mental wellness benefits through Spring Health</li>
<li>Family-forming support provided by Carrot</li>
<li>Paid parental leave</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$98,000 to $130,000</Salaryrange>
      <Skills>identity and access management platforms, macOS and Windows endpoints, ITSM platforms, troubleshooting across SaaS, endpoint, identity, and network layers, technical documentation for systems and operational procedures, scripting experience in Python, Bash, or PowerShell, familiarity with Terraform or other infrastructure-as-code tools for automation, Kubernetes-based or containerized environments, compliance frameworks such as SOC 2 or ISO 27001, integrating SaaS platforms via APIs or automation tooling, office network topology, hardware, and physical infrastructure, high-growth startup or scale-up environments</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing platform provider that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>98000</Compensationmin>
      <Compensationmax>130000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4664227006</Applyto>
      <Location>Dallas, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd44a200-1ac</externalid>
      <Title>Director of Engineering (Service Foundations)</Title>
      <Description><![CDATA[
<p>We are seeking a seasoned Director of Engineering to lead our Service Foundations team. As a key member of our executive engineering team, you will be responsible for building and operating distributed systems, driving company-wide efficiency, reliability, and automation.</p>
<p>In this role, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical infrastructure initiatives that integrate AI-driven tooling directly into the infrastructure itself to make it more adaptive, scalable, and intelligent.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Solve real business needs at a large scale by applying your software engineering expertise</li>
<li>Ensure consistent delivery against milestones and strong alignment with the field, working &#39;two-in-a-box&#39; with product leadership</li>
<li>Evolve organisational structure to align with long-term initiatives, and build strong &#39;5 ingredient&#39; teams with good communications architecture</li>
<li>Manage technical debt, including long-term technical architecture decisions and balance product roadmap</li>
<li>Lead and participate in technical, product, and design discussions</li>
<li>Build, manage, and operate highly scalable services in the cloud</li>
<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Create a culture of excellence on the team while leading with empathy</li>
</ul>
<p>Requirements:</p>
<ul>
<li>20+ years of industry experience building and operating large-scale distributed systems</li>
<li>Proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads</li>
<li>Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions</li>
<li>Ability to translate requirements from internal engineering teams into clear priorities and execution plans</li>
<li>Fluent across the infrastructure stack (storage, orchestration, observability, and developer platforms), with intuition for how these layers interact</li>
<li>Ability to evaluate and evolve abstractions: knowing when to unify, when to localise, and how to reduce cognitive load for product teams</li>
<li>BS in Computer Science (Master&#39;s or PhD preferred)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratise data, analytics, and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please see the benefits information for your location.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure systems, Distributed systems, Infrastructure as Code, Containerisation, Orchestration, Observability, Developer platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201768002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and we value candidates who can bring their diverse experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors: qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid life insurance, plus voluntary supplemental life insurance</li>
<li>Short- and long-term disability insurance</li>
<li>Flexible Spending Account and Health Savings Account</li>
<li>Tuition reimbursement</li>
<li>Ability to participate in the Employee Stock Purchase Program (ESPP)</li>
<li>Mental wellness benefits through Spring Health</li>
<li>Family-forming support provided by Carrot</li>
<li>Paid parental leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment and a work culture focused on innovative disruption</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a814df90-b97</externalid>
      <Title>Staff Software Engineer, Applied Training</Title>
      <Description><![CDATA[<p>We&#39;re building the Applied Training team to fix the problem of researchers spending their first month on cluster setup instead of research. You&#39;ll be an early member of a small team, responsible for our Kubernetes-native research cluster platform, or the sandbox client for agentic training and evaluation, or possibly a new project altogether.</p>
<p>Your responsibilities will include contributing to the roadmap for Applied Training, designing and building a complete research cluster experience, owning the Python SDK, and writing documentation for running popular OSS training frameworks on CoreWeave.</p>
<p>You&#39;ll work with infrastructure teams and customers directly, understanding how they structure their internal supercomputing stacks and bringing that knowledge back to what we build.</p>
<p>As a staff software engineer, you&#39;ll have 8-12+ years of experience building distributed systems, ML infrastructure, or developer platforms, with real Kubernetes experience and a passion for rigorous engineering enabled by AI-based workflows.</p>
<p>You&#39;ll be a good communicator, able to work with customers, translate researcher complaints into system designs, and contribute to the growth and success of our team.</p>
<p>If you&#39;re excited about this opportunity, please apply!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Distributed systems, ML infrastructure, Developer platforms, Python, SDK development, Documentation writing, Agentic AI, RL training, Sandbox isolation, Container runtimes, Isolation, Serverless platforms, OSS contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647607006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0396ac1c-dad</externalid>
      <Title>Senior Staff Engineer, Cloud Economics</Title>
      <Description><![CDATA[<p>Reddit is a community of communities. It&#39;s built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet.</p>
<p>The Ads Foundations organization is responsible for the technical backbone powering Ads Monetization at scale. Within this ecosystem, efficient resource utilization is critical.</p>
<p>We are seeking a Senior Staff Engineer to serve as the Cloud Resources Technical Owner for the Ads Domain. You will be the primary engineering point of contact for the Senior Director in Ads and Cloud Operations/Resources (COR &amp; Opex) stakeholders.</p>
<p><strong>Responsibilities</strong></p>
<p>Technical Vision &amp; Strategy</p>
<ul>
<li>Define and drive the technical strategy for Cloud Resource management within Ads first, ensuring that cost accountability is built into the architecture of our systems.</li>
<li>High-Fidelity Investment Modeling: Elevate cloud estimation from guesswork to a rigorous engineering discipline. You will lead high-quality forecasting of new cloud investments and efficiency projects, designing data-driven models to validate technical ROI before builds happen.</li>
<li>Design and implement a roadmap for Cost Observability 2.0, moving beyond simple reporting to real-time, service/team-level spend attribution and automated anomaly detection.</li>
</ul>
<p>Engineering &amp; Tooling Leadership</p>
<ul>
<li>Design and build internal platforms that programmatically enforce PnL accountability. You will engineer these yourself, or collaborate with Core Infrastructure partners, to deliver the dashboards, alerts, and governance tools that every Ads team relies on to manage its cloud footprint.</li>
<li>Architect automated frameworks for validating cost estimates and forecasting, replacing manual spreadsheets with data-driven software solutions.</li>
</ul>
<p>Scale &amp; Optimization</p>
<ul>
<li>Fight for observability by instrumenting deep telemetry into our cloud infrastructure. You will be hands-on in identifying inefficiencies (e.g., underutilized clusters, uncompressed data flows) and re-architecting critical paths for cost reduction.</li>
<li>Lead the technical validation of vendor and 3rd-party tool integration, ensuring we extract maximum engineering value from every dollar spent.</li>
</ul>
<p>Cultural &amp; Technical Stewardship</p>
<ul>
<li>Act as a role model for the Ads domain and the wider company. You will set the standard for how engineering teams think about Cost as a Non-Functional Requirement, eventually scaling these patterns to other domains.</li>
<li>Partner with Finance and Engineering leadership to translate Cloud Spend into actionable engineering tasks (e.g., refactor Service X to use Spot instances).</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience, with a strong focus on public cloud infrastructure (AWS/GCP/Azure) and large-scale distributed systems.</li>
<li>Engineer-First Mindset: You are comfortable writing code (Go, Python, Java) to solve infrastructure problems. You don&#39;t just ask for a report; you build the API that generates it.</li>
<li>Deep Cloud Expertise: You have mastery over Kubernetes, container orchestration, and cloud-native storage, understanding exactly how architectural choices impact the bottom line.</li>
<li>Operational Excellence: Proven track record of building observability pipelines (Prometheus, Grafana, Datadog) that drive operational and financial alerts.</li>
<li>Influential Leader: Skilled at driving clarity in ambiguous spaces. You can convince a Principal Engineer to refactor their service for cost efficiency because you can prove the technical and business value.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building custom FinOps tooling or internal developer platforms.</li>
<li>Background in performance engineering or capacity planning for high-traffic ad tech environments.</li>
<li>Contributions to open-source projects related to cloud efficiency or observability.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$232,500-$325,500 USD</Salaryrange>
      <Skills>public cloud infrastructure, large-scale distributed systems, Kubernetes, container orchestration, cloud-native storage, observability pipelines, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7628291</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>755c5895-997</externalid>
      <Title>Manager, Product Engineering</Title>
      <Description><![CDATA[<p>At Instabase, we&#39;re committed to democratizing access to cutting-edge AI innovation. Our market opportunity is vast, with customers representing some of the largest and most complex organisations in the world. As a Manager, Product Engineering, you will lead a team responsible for the full-stack development of enterprise software, working closely with cross-functional teams to design and deliver high-impact solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Team Leadership – Build, manage, and develop a team of high-performing engineers, providing mentorship and career development while fostering a collaborative and inclusive culture.</li>
<li>Cross-Functional Collaboration – Partner with product, design, and technical writing teams to define the roadmap and drive execution.</li>
<li>End-to-End Execution – Oversee the entire software development lifecycle, from capacity planning and roadmapping to prototyping and production deployment.</li>
<li>Technical Leadership – Contribute to technical discussions and architectural decisions within your product area.</li>
<li>Quality &amp; Operational Excellence – Establish and uphold best practices to maintain a high-quality bar for all deliverables, ensuring reliability, scalability, and usability.</li>
<li>Innovation &amp; AI Integration – Leverage modern AI tools to improve team productivity and enhance product capabilities.</li>
</ul>
<p>About You:</p>
<ul>
<li>Experience – 5+ years of engineering management experience, with a track record of building and leading high-performing teams.</li>
<li>AI &amp; Data Expertise – Strong background in AI, ML, and data-driven products, with experience building and scaling intelligent applications.</li>
<li>Startup Mentality – Comfortable operating in a fast-paced startup environment, navigating ambiguity, and driving impactful results.</li>
<li>Technical Proficiency – Deep knowledge of modern technology stacks, including cloud infrastructure, container orchestration systems, TypeScript, React, and related tools.</li>
<li>SaaS &amp; Enterprise Experience – Proven ability to deliver SaaS-based enterprise software solutions at scale.</li>
<li>Process &amp; Productivity – Experience implementing the SDLC and leveraging modern productivity software (Jira, Confluence, Figma, etc.).</li>
<li>AI-Driven Development – Passion for integrating modern AI tools to optimise development workflows.</li>
</ul>
<p>Compensation: The base salary range for this role is $280,000 to $300,000 + bonus, equity, and benefits.</p>
<p>Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters, 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you’re in the office.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$280,000 to $300,000 + bonus, equity, and benefits</Salaryrange>
      <Skills>AI, ML, data-driven products, cloud infrastructure, container orchestration systems, TypeScript, React, SaaS-based enterprise software solutions, SDLC, productivity software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase is a global company with offices in San Francisco and Bengaluru, offering a consumption-based pricing model for customers to access its AI Hub platform features.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8419974002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>041d015e-3b6</externalid>
      <Title>Senior Software Engineer (CI) - Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Software Engineer to join our team, focusing on the software build, test, and release processes for Elastic Agent. This role extends from the CI/CD systems which run automated test and release processes to the build tooling which underpins a complex Golang project.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Ensuring the test framework for Elastic Agent consistently delivers accurate test results to developers quickly and cost-effectively</li>
<li>Producing automated CI analytics to quantify business impact, surface bottlenecks, and prioritize improvements</li>
<li>Implementing a curated testing strategy, managing flaky tests, and maintaining an up-to-date support matrix</li>
</ul>
<p>The ideal candidate will have experience with Golang, Buildkite, and complex cross-platform test and deployment pipelines. They will also possess strong communication and emotional intelligence skills, with the ability to work on a distributed team of engineers around the world.</p>
<p>As a Senior Software Engineer, you will play a key role in shaping the future of our platform and contributing to the success of our customers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Buildkite, CI/CD, Test automation, Containerization, Security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic provides a cloud-based platform for search, security, and observability, serving over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7525644</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f94dea6d-70a</externalid>
      <Title>Distributed Systems Engineer - Data Platform - Analytical Database Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About the Role</p>
<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>
<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>
<p>As a Distributed systems engineer - Analytical Database Platform, you will:</p>
<ul>
<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>
<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>
<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>
<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>
</ul>
<p>Key qualifications:</p>
<ul>
<li>3+ years of experience in software development, covering distributed systems and databases.</li>
<li>Strong programming skills (Golang, Python, and C++ preferred), as well as a deep understanding of software development best practices and principles.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimization, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>
<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This technology, already used by Cloudflare’s enterprise customers, is provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since then, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/4886734</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f8a39f0-f7c</externalid>
      <Title>Senior Software Engineer - Artifact Management</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Senior Software Engineer - Artifact Management to join our team. In this role, you will:</p>
<ul>
<li>Design and implement distributed storage and caching solutions for artifacts, and evaluate third-party solutions</li>
<li>Develop APIs and services for artifact publishing, retrieval, and version management</li>
<li>Optimize performance, reliability, and cost efficiency across multi-region deployments</li>
<li>Work closely with build, release, and infrastructure teams to ensure seamless integration into developer workflows</li>
<li>Drive observability, automation, and resilience in a high-traffic production environment by creating dashboards, metrics, and alerts</li>
<li>Partner with cross-functional teams to implement best practices and drive migration from legacy systems</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, Software Engineering, or a related field</li>
<li>4+ years of experience in software or infrastructure engineering</li>
<li>Strong experience operating services in production and at scale</li>
<li>Deep experience with Go as the primary programming language</li>
<li>Experience with infrastructure-as-code, CI/CD systems, and containerization</li>
<li>A solid understanding of system design, scalability, and efficiency</li>
<li>Extensive experience with Artifactory and Cloudsmith</li>
<li>A passion for improving developer experience and enabling other engineers to do their best work</li>
</ul>
<p>Preferred qualifications include experience integrating or enabling tools that leverage LLMs or code intelligence for developers, experience with KubeVirt and KataContainers, and a willingness to learn and adapt to new technologies and processes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>Go, Infrastructure-as-code, CI/CD systems, Containerization, System design, Scalability, Efficiency, Artifactory, Cloudsmith, LLMs or code intelligence for developers, KubeVirt, KataContainers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4612039006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>72ebb09d-b37</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We&#39;re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organization</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09c520cf-f62</externalid>
      <Title>Systems Engineer, Kernel</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>
<p>As a kernel generalist, you will be responsible for debugging kernel-level issues; analyzing and fixing crashes, panics, and dumps; and upstreaming fixes and features that improve the performance and reliability of our stack.</p>
<p>This position is ideal for someone who thrives in low-level systems engineering, understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>
<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>
<p>Our Team&#39;s Stack:</p>
<ul>
<li>Python, Go, bash/sh, C</li>
<li>Prometheus, VictoriaMetrics, Grafana</li>
<li>Linux Kernel (custom build), Ubuntu</li>
<li>Intel/AMD/ARM CPUs, NVIDIA GPUs, DPUs, InfiniBand and Ethernet NICs</li>
<li>Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>
</ul>
<p>Focus Areas:</p>
<ul>
<li>Kernel Debugging – Analyze kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>
<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>
<li>Stack-Wide Support – Ensure kernel support and stability across virtualization (KubeVirt, QEMU, VFIO), container runtimes (containerd, nydus, kubelet), and HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand).</li>
<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, and ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>
<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>
</ul>
<p>About the role:</p>
<ul>
<li>Triage and fix kernel crashes and performance regressions.</li>
<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>
<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>
<li>Implement diagnostics and tooling for kernel-level observability.</li>
<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>
<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>
<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>
<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, kdump.</li>
<li>Strong C programming skills with the ability to write maintainable and upstream-quality code.</li>
<li>Experience working with kernel modules, drivers, and subsystems.</li>
<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Contributions to the Linux kernel or related open-source projects.</li>
<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>
<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>
<li>GPU/DPU bring-up and driver experience.</li>
<li>Experience in HPC or large-scale distributed systems.</li>
<li>Familiarity with QA/QE best practices.</li>
<li>Experience working in cloud environments.</li>
<li>Experience as a software engineer writing large-scale applications.</li>
<li>Experience with machine learning is a huge bonus.</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance &amp; stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4599319006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>92d63795-0ea</externalid>
      <Title>Principal Systems Engineer, M&amp;A</Title>
      <Description><![CDATA[<p>The Infrastructure Engineering organization is seeking an accomplished Principal Systems Engineer to lead our acquisition integration engineering practice. This pivotal role will own the end-to-end infrastructure engineering lifecycle of integrating newly acquired businesses into Anduril&#39;s existing ecosystem.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define, establish, and lead the infrastructure integration engineering practice, setting the technical vision and strategy for integrating new entities and technologies.</li>
<li>Single-threaded ownership of all infrastructure component(s) of acquisition integration from discovery and due diligence through migration execution and hypercare.</li>
<li>Conduct comprehensive technical assessments of target companies&#39; infrastructure, systems, and operational capabilities, identifying risks and opportunities.</li>
<li>Develop and present high-level architectural strategies and detailed roadmaps for integrations to executive leadership, founders, and technical teams.</li>
<li>Design and implement robust, scalable, and secure system architectures for integrated environments, ensuring alignment with Anduril&#39;s overall technology strategy.</li>
<li>Develop and execute detailed migration plans, managing complex technical challenges and dependencies.</li>
<li>Provide post-migration hypercare support, ensuring a smooth transition and stabilization of integrated systems.</li>
<li>Define, document, and continuously improve repeatable processes to accelerate acquisition integration, establishing benchmarks, conducting post-mortems, and implementing lessons learned.</li>
<li>Identify, evaluate, and implement or scope the development of new tools and technologies to enhance discovery, migration, and testing efficiency.</li>
<li>Collaborate closely with Security teams to ensure all integrated systems meet Anduril&#39;s stringent security requirements and policies.</li>
<li>Partner with Client Engineering teams to ensure seamless integration of acquired client and client-facing technologies and services.</li>
<li>Provide clear, concise, and opinionated technical guidance, and proactively push back on misaligned proposals to ensure successful technical outcomes.</li>
<li>Act as a technical authority, mentor, and trusted advisor to engineering teams involved in integration efforts.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Minimum of 12 years of progressive experience in Systems Engineering, Network Engineering, and/or IT Infrastructure roles with a focus on complex, enterprise-scale environments.</li>
<li>Self-sufficient ability to execute in (technical and non-technical) program management, architecture, and hands-on engineering capacities.</li>
<li>Demonstrated expertise in defining and building engineering practices and repeatable processes.</li>
<li>Proven ability to operate across the entire engineering lifecycle, from strategic discovery and architecture to hands-on execution and hypercare.</li>
<li>Exceptional ability to communicate complex technical concepts to diverse audiences, including C-suite executives, founders, and engineering teams.</li>
<li>Deep understanding of modern cloud architectures (AWS, Azure, GCP), hybrid cloud solutions, and on-premises infrastructure.</li>
<li>Extensive experience with enterprise networking technologies, including routing, switching, firewalls, VPNs, and load balancing.</li>
<li>Strong knowledge of server virtualization, containerization technologies (e.g., Docker, Kubernetes), and operating systems (Linux, Windows).</li>
<li>Experience with identity and access management (IAM) solutions, single sign-on (SSO), and multi-factor authentication (MFA).</li>
<li>Proficiency in scripting and automation for infrastructure deployment and management (e.g., Python, Ansible, Terraform).</li>
<li>Strong understanding of security principles, best practices, and common vulnerabilities within systems and networks.</li>
<li>Familiarity with client engineering principles and technologies.</li>
<li>Proven experience in identifying tooling gaps and either developing solutions or effectively scoping them for development.</li>
<li>Excellent analytical, problem-solving, and critical thinking skills.</li>
<li>Ability to travel for remote deployments and assessments as required.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with infrastructure-as-code (IaC) principles and tools.</li>
<li>Familiarity with CI/CD pipelines and DevOps methodologies.</li>
<li>Experience with data center design and operations.</li>
<li>Experience in the defense technology or highly regulated industries.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>Systems Engineering, Network Engineering, IT Infrastructure, Cloud Architectures, Hybrid Cloud Solutions, On-Premises Infrastructure, Enterprise Networking Technologies, Server Virtualization, Containerization Technologies, Operating Systems, Identity and Access Management, Single Sign-On, Multi-Factor Authentication, Scripting and Automation, Infrastructure Deployment and Management, Infrastructure-as-Code, CI/CD Pipelines, DevOps Methodologies, Data Center Design and Operations, Defense Technology, Highly Regulated Industries</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5111019007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6556c9a6-357</externalid>
      <Title>Senior Professional Services, Technical Architect - AI</Title>
      <Description><![CDATA[<p>As a Senior Professional Services Technical Architect, AI at GitLab, you&#39;ll be an embedded expert who helps customers move from ideas to production. You&#39;ll work directly with customer teams as a consultative partner, running in-depth discovery to understand their environment and priorities, then designing and delivering solutions that connect business goals to architecture and implementation.</p>
<p>This is a deeply technical, customer-facing role where you&#39;ll build and deploy Custom Agents, Custom Flows, and CI/CD integrations. You&#39;ll own delivery end-to-end, from prototype through production support. You&#39;ll partner closely with Professional Services and Customer Success stakeholders, including Professional Services Engineers, Project Managers, Customer Success Managers, and Solution Architects.</p>
<p>Some examples of our projects include leading customer discovery and defining a prioritized GitLab Duo Agent Platform use case roadmap tied to clear success criteria, designing and delivering production-ready GitLab Duo Agent Platform implementations, building rapid prototypes to demonstrate the art of the possible with agentic AI, and integrating the GitLab Duo Agent Platform with customer systems and workflows using GitLab APIs, pipeline configuration, and infrastructure as code.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Conduct deep customer discovery to understand business goals, technical constraints, and organizational dynamics, and translate them into clear problem statements and a prioritized use case plan for GitLab Duo Agent Platform.</li>
<li>Partner with customer stakeholders across engineering, security, compliance, and business teams to align on success criteria, milestones, and adoption strategy for AI workflows in production.</li>
<li>Design, build, and deploy production-ready GitLab Duo Agent Platform solutions, including Custom Agents, Custom Flows, and CI/CD integrations that map to validated customer use cases.</li>
<li>Embed with customer engineering teams to deliver hands-on implementations end-to-end, from prototype to production rollout, troubleshooting, and optimization.</li>
<li>Configure and integrate platform foundations such as runners, network access, runtime sandboxing, GitLab APIs (REST and GraphQL), and AI governance controls (for example, role-based access control and model policies) to meet enterprise requirements.</li>
<li>Measure and communicate impact using DORA (DevOps Research and Assessment) metrics, AI Impact Analytics, and Value Stream Analytics, and use those insights to guide iteration and expansion of successful use cases.</li>
<li>Codify repeatable deployment patterns, reusable assets, and lessons learned, contributing back to GitLab through documentation, accelerators, and product feedback informed by field experience.</li>
<li>Travel up to 50% for customer site engagements and company onsite events to support delivery, onboarding, and stakeholder alignment.</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Demonstrated experience leading customer-facing technical engagements, from discovery through production rollout, with ownership of outcomes.</li>
<li>Proficiency in Python, with experience building and operating production-grade applications and integrations.</li>
<li>Experience delivering with GitLab CI/CD, including pipeline design, YAML configuration, and using GitLab APIs (REST and GraphQL).</li>
<li>Hands-on experience with infrastructure as code (for example, Terraform or Ansible) and deploying solutions into enterprise environments.</li>
<li>Working knowledge of large language model (LLM) capabilities and limitations, including prompt engineering and building agentic workflows (such as Custom Agents and Custom Flows).</li>
<li>Experience with Docker, container orchestration concepts, and runner configuration in secure environments.</li>
<li>Familiarity with DevSecOps practices, including security controls, access management, and compliance requirements that impact deployment design.</li>
<li>Strong written and verbal communication skills, with the ability to partner closely with customer stakeholders and translate business goals into technical plans in a remote, asynchronous environment.</li>
</ul>
<p>About the team:</p>
<p>GitLab&#39;s Professional Services organization within Customer Success helps customers get value from the GitLab Duo Agent Platform. We&#39;re a remote, asynchronous team that works closely with customer-facing colleagues to support successful deployments. We focus on turning what we learn in the field into reusable assets, clearer documentation, and product feedback that helps improve GitLab Duo Agent Platform for future customers.</p>
<p>The base salary range for this role’s listed level is currently for residents of the United States only. This range is intended to reflect the role&#39;s base salary rate in locations throughout the US. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, alignment with market data, and geographic location. The base salary range does not include any bonuses, equity, or benefits. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.</p>
<p>United States Salary Range: $164,880-$247,320 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,880-$247,320 USD</Salaryrange>
      <Skills>Python, GitLab CI/CD, Infrastructure as Code, Docker, Container Orchestration, DevSecOps, Large Language Model (LLM), Prompt Engineering, Agentic Workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8334735002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f296b6b0-e66</externalid>
      <Title>Senior Software Security Engineer</Title>
      <Description><![CDATA[
<p>About the Role: The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.</p>
<p>Responsibilities:</p>
<ul>
<li>Build security for large-scale AI clusters, implementing robust cloud security architecture including IAM, network segmentation, and encryption controls</li>
<li>Design secure-by-design workflows and secure CI/CD pipelines across our services, and help build secure cloud infrastructure, drawing on expertise in cloud environments, Kubernetes security, container orchestration, and identity management</li>
<li>Ship and operate secure, high-reliability services using Infrastructure-as-Code (IaC) practices and GitOps workflows</li>
<li>Apply deep expertise in threat modeling and risk assessment to secure complex multi-cloud environments</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5-15+ years of software engineering experience implementing and maintaining critical systems at scale</li>
<li>Bachelor&#39;s degree in Computer Science/Software Engineering or equivalent industry experience</li>
<li>Strong software engineering skills in Python or at least one systems language (Go, Rust, C/C++)</li>
<li>Experience managing infrastructure at scale with DevOps and cloud automation best practices</li>
<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Outstanding communication skills, translating technical concepts effectively across all organizational levels</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Strong systems thinking with ability to identify and mitigate risks in complex environments</li>
<li>Low ego, high empathy engineer who attracts talent and supports diverse, inclusive teams</li>
<li>Experience supporting fast-paced startup engineering teams</li>
<li>Passionate about AI safety and alignment, with keen interest in making AI systems more interpretable and aligned with human values</li>
</ul>
<p>Salary: The annual compensation range for this role is £240,000-£325,000 GBP.</p>
<p>Required Skills:</p>
<ul>
<li>Cloud security architecture</li>
<li>IAM</li>
<li>Network segmentation</li>
<li>Encryption controls</li>
<li>Kubernetes security</li>
<li>Container orchestration</li>
<li>Identity management</li>
<li>Infrastructure-as-Code (IaC)</li>
<li>GitOps</li>
<li>Threat modeling</li>
<li>Risk assessment</li>
<li>DevOps</li>
<li>Cloud automation</li>
<li>Python</li>
<li>Go</li>
<li>Rust</li>
<li>C/C++</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Secure-by-design workflows</li>
<li>CI/CD pipelines</li>
<li>Secure cloud infrastructure</li>
<li>Cloud environments</li>
<li>Containerization</li>
<li>Identity and access management</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000-£325,000 GBP</Salaryrange>
      <Skills>Cloud security architecture, IAM, Network segmentation, Encryption controls, Kubernetes security, Container orchestration, Identity management, Infrastructure-as-Code (IaC), GitOps, Threat modeling, Risk assessment, DevOps, Cloud automation, Python, Go, Rust, C/C++, Secure-by-design workflows, CI/CD pipelines, Secure cloud infrastructure, Cloud environments, Containerization, Identity and access management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5022845008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c98acbb-1ba</externalid>
      <Title>Senior Staff Engineer, Payments Compliance</Title>
      <Description><![CDATA[<p>We are looking for a seasoned technical leader to join our Payments Compliance team as a Senior Staff Engineer. In this role, you will be responsible for owning the technical vision and architectural direction across the full Compliance engineering landscape, spanning Policy Enforcement, Identity, Screening, Auditing, and Compliance Experience.</p>
<p>As a Senior Staff Engineer, you will serve as the connective tissue across Compliance&#39;s multi-year strategic initiatives, defining how components fit together, identifying where capabilities can be shared rather than duplicated, and ensuring we leverage platform investments from partner teams.</p>
<p>Your decisions will directly affect how Airbnb meets obligations such as Anti-Money Laundering (AML), Know Your Customer (KYC), and sanctions screening while minimizing operational cost and customer friction.</p>
<p>This role extends well beyond the Compliance organization itself. You will partner directly with cross-organizational engineering teams, as well as cross-functional stakeholders across Product, Content, Legal, and Design.</p>
<p>The technical choices you make carry direct financial and legal exposure, requiring the judgment, depth of expertise, and organizational credibility to drive high-stakes design tradeoffs across team boundaries.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the end-to-end system design vision for the Compliance organization, setting the architectural direction for an engineering organization of nearly 30 engineers across multiple teams.</li>
<li>Driving foundational architectural shifts, including the move from an account-centric model to a customer-centric one, rethinking how we model identity, risk, and enforcement across the platform.</li>
<li>Leading the technical strategy for expanding KYC capabilities: supporting small and medium businesses, reimagining business onboarding and account structures, enabling KYC through third-party APIs, and extending verification to third-party payees.</li>
<li>Architecting systems that adapt to the evolving digital identity landscape, including new verification standards, government-issued digital credentials, and shifting privacy regulations, without requiring costly re-platforming.</li>
<li>Ensuring our technical foundations are flexible enough to absorb unforeseen regulatory mandates with aggressive timelines, without destabilizing existing systems or requiring disproportionate engineering investment.</li>
<li>Partnering with technical leaders across multiple organizations to drive alignment on shared capabilities and cohesive system design, reducing duplication and compounding technical debt.</li>
<li>Working closely with Product, Design, Policy, Legal, Operations, and other cross-functional partners as part of a globally distributed team to define and ship impactful features.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>15+ years of technical experience, with 10+ years of relevant industry experience in a fast-paced tech environment.</li>
<li>Prior knowledge of Regulatory Compliance standards (AML, KYC, sanctions screening) and demonstrated experience designing and implementing those controls at scale.</li>
<li>Proven track record of setting technical direction and architectural strategy for a large engineering organization, with the ability to drive alignment across organizational boundaries.</li>
<li>Experience designing systems that span multiple teams and domains, with a focus on cohesion, reusability, and long-term maintainability over initiative-by-initiative solutions.</li>
<li>Excellent communication skills and the ability to influence senior technical and non-technical stakeholders across the company.</li>
<li>Strong problem solver with deep experience operating and leading on-call for production systems at scale.</li>
<li>Technical leadership: hands-on experience leading large project teams, making high-stakes design tradeoffs, and translating regulatory requirements into scalable system architectures.</li>
<li>BS/MS/PhD in Computer Science, a related field, or equivalent work experience.</li>
<li>Proficiency in one or more back-end server languages (Java/Ruby/Go/C++/etc.).</li>
<li>Deep understanding of architectural patterns of high-scale web and data applications.</li>
<li>Be future-looking: we might be focused on immediate regulations, but we need to build for the long term. You think in terms of platforms, not projects.</li>
<li>End-to-end ownership mentality that transcends team boundaries, with the credibility and judgment to make decisions that carry direct financial and legal implications.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Regulatory Compliance, Anti-Money Laundering, Know Your Customer, Sanctions Screening, System Architecture, Technical Leadership, Cloud Computing, Containerization, Microservices, API Design, Security, Identity and Access Management, DevOps, Agile Methodologies, Scrum, Kanban, Continuous Integration, Continuous Deployment, Continuous Testing, Automation, Artificial Intelligence, Machine Learning, Data Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces for short-term rentals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7688467</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9d5360b-229</externalid>
      <Title>Staff Platform Engineer - Infra + DevOps</Title>
      <Description><![CDATA[<p>We&#39;re looking for a seasoned Platform Engineer to join our team. As a leader in aging care innovation, Honor provides technology, tools, and services that empower older adults to live life on their own terms. Our platform engineering team builds and manages the infrastructure &amp; core services that power Honor&#39;s Care Platform. We&#39;re seeking someone with at least 6 years of professional experience in a platform engineering team within a product-centric company. You will be responsible for designing, implementing, and maintaining scalable distributed systems &amp; infrastructure. Your expertise should include cloud platforms, advanced software design patterns &amp; architecture, operations and automation, and containerization technologies like Kubernetes. You will be joining a small team of highly skilled, enthusiastic, and passionate engineers with an opportunity to create an outsized impact by contributing to the future evolution of Honor&#39;s Care Platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement foundational patterns and libraries for Python applications, across a range of technologies from API services to event processing</li>
<li>Utilize Infrastructure as Code (IaC) tools to ensure reproducible and scalable environment setups</li>
<li>Design and implement infrastructure for applications hosted on AWS, supporting event-driven systems, containerized services on Kubernetes, and serverless functions</li>
<li>Develop and maintain robust CI/CD pipelines using tools such as Jenkins and ArgoCD</li>
<li>Automate the lifecycle management of code from development through production, including code promotion and configuration management</li>
<li>Instrument observability through tools such as CloudWatch and DataDog to monitor and optimize application performance across multiple environments</li>
<li>Scale infrastructure to meet increasing demand while managing cost effectively</li>
<li>Define, instrument, and measure standards for quality, security, scalability, and availability with a focus on delivering business value</li>
<li>Deliver a turn-key developer experience for local development</li>
<li>Keen interest in developing talent through mentorship</li>
<li>Strong written and verbal communication, tailored to a variety of audiences</li>
<li>A strategic thinker with a product-first approach and customer obsession</li>
</ul>
<p>Requirements:</p>
<ul>
<li>At least 6 years of professional experience in a platform engineering team within a product-centric company</li>
<li>Experience working with an RPC architecture</li>
<li>Experience working at a technology startup and familiarity with the challenges of evolving platform maturity</li>
<li>First-hand experience navigating multiple distributed architecture patterns</li>
</ul>
<p>Our range reflects the hiring range for this position. We use the national average to determine pay, as we are a remote-first company. Individual pay is based on a number of factors including qualifications, skills, experience, education, and training. Base pay is just a part of our total rewards program. Honor offers generous equity packages that increase with position level and responsibilities, and a 401K with up to a 4% employer match. We provide medical, dental and vision coverage including zero cost plans for employees. Short Term Disability, Long Term Disability and Life Insurance are fully employer paid with a voluntary additional Life Insurance option. We offer a generous time off program, mental health benefits, wellness program, and discount program.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700-$223,000 USD</Salaryrange>
      <Skills>cloud platforms, advanced software design patterns &amp; architecture, operations and automation, containerization technologies like Kubernetes, Infrastructure as Code (IaC), AWS, event-driven systems, serverless functions, CI/CD pipelines, Jenkins, ArgoCD, observability, CloudWatch, DataDog, quality, security, scalability, availability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults. Its portfolio includes Home Instead, Inc., the world&apos;s leading provider of in-home care.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8297124002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9299d24f-de5</externalid>
      <Title>Staff Software Engineer - Artifact Management</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Staff Software Engineer - Artifact Management to join our team. As a Staff Software Engineer, your responsibilities will include:</p>
<ul>
<li>Designing and implementing distributed storage and caching solutions for artifacts, and evaluating and exploring third-party solutions</li>
<li>Developing APIs and services for artifact publishing, retrieval, and version management</li>
<li>Optimizing performance, reliability, and cost efficiency across multi-region deployments</li>
<li>Working closely with build, release, and infrastructure teams to ensure seamless integration into developer workflows</li>
<li>Driving observability, automation, and resilience in a high-traffic production environment by creating dashboards, metrics, and alerts</li>
<li>Diagnosing and resolving system bottlenecks, storage issues, and dependency-related failures</li>
<li>Driving and implementing best practices in artifact creation and lifecycle management</li>
<li>Growing with and investing in your teammates: sharing your ideas, listening to others, being curious, having fun, and being yourself</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A minimum of 7 years of experience in software or infrastructure engineering</li>
<li>Deep experience operating services in production and at scale</li>
<li>Proficiency in Go as a primary programming language</li>
<li>Strong experience with infrastructure-as-code, CI/CD systems (e.g., GitHub Actions, ArgoCD), and containerization (e.g., Docker, Kubernetes)</li>
<li>Expertise in leading large-scale system design, scalability, and efficiency</li>
<li>Experience with third-party vendors like Artifactory</li>
<li>A passion for improving developer experience and enabling other engineers to do their best work</li>
</ul>
<p>In addition to the required skills, preferred skills include experience integrating or enabling tools that leverage LLMs or code intelligence for developers (e.g., GitHub Copilot, Cody, custom LLM integrations), experience with KubeVirt and Kata Containers, and experience with LangGraph/LangChain.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, Infrastructure-as-code, CI/CD systems, Containerization, Large-scale system design, Scalability, Efficiency, Third-party vendors, Artifactory, LLMs or code intelligence, GitHub Copilot, Cody, KubeVirt, Kata Containers, LangGraph/LangChain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4612032006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c916726e-d71</externalid>
      <Title>Principal Software Engineer (Networking) - Platform</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer (Networking) - Platform, you will lead technical initiatives for automating network engineering efforts to guarantee the reliability of the global Elastic infrastructure. You will grow our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling and automations.</p>
<p>You will collaborate in an inclusive environment focused on operational excellence that uplifts others. You will help prevent repeated customer impact through major incident response and prioritized problem management. Our on-call rotation is well distributed, and we address complex customer concerns as well.</p>
<p>You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features. For example, debugging Azure Networking for Elastic Cloud Serverless is part of our efforts, and we want your experience to contribute to a truly exceptional customer experience!</p>
<p>We want to hear about your successes and lessons learned from striving for &#39;progress, not perfection&#39; in the name of Platform reliability, and about your customer-first approach to solving operational problems for both today and the future.</p>
<p>You have a passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships. Examples of working in distributed teams or working remotely are desirable.</p>
<p>You have designed and built a SaaS product in a public cloud, ideally using Infrastructure-as-Code tooling such as Crossplane or Terraform.</p>
<p>You have built Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</p>
<p>You have written product features or functions in Golang or other programming languages.</p>
<p>You have worked with containerized services (such as Docker).</p>
<p>You have proven results in leading and improving cross-team engineering initiatives.</p>
<p>You have experience in system administration with professional skills in Linux on distributed systems at scale.</p>
<p>You have diagnosed, designed, and implemented solutions with the Elastic Stack.</p>
<p>You are experienced in self-organizing and sharing within a globally distributed team environment.</p>
<p>You strengthen team members in bringing out the best of each other by uplifting others with coaching and mentoring.</p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $189,800-$232,900 USD. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$189,800-$232,900 USD</Salaryrange>
      <Skills>Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Linux, Distributed Systems, Elastic Stack, Infrastructure-as-Code, Crossplane, Terraform, Kubernetes, Containerized Services, Docker, System Administration, Golang, Programming Languages, SaaS Product Development, Kubernetes-at-Scale Infrastructure, Automation, Self-Organizing Team Environment, Coaching and Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7565185</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>987aab7f-f67</externalid>
      <Title>Principal Solutions Architect</Title>
      <Description><![CDATA[<p>As a Principal Solutions Architect in GitLab&#39;s global Solutions Architecture Center of Excellence, you&#39;ll be the trusted technical advisor and pre-sales partner who helps customers unlock the full value of GitLab&#39;s AI-powered DevSecOps platform.</p>
<p>You will solve complex challenges across the software lifecycle by connecting GitLab, AI agents, security, and cloud-native capabilities to real business outcomes, guiding customers through digital transformation and modern software delivery.</p>
<p>Reporting into the Senior Director and acting as the AI subject matter expert on a team of specialists, you&#39;ll own technical strategy for strategic accounts, lead value stream and Proof of Value (PoV) engagements, and serve as the technical &#39;CTO&#39; for your accounts.</p>
<p>In your first year, you&#39;ll be focused on driving successful platform evaluations and adoption as part of the pre-sales process, shaping AI-led solution architectures, influencing product direction with field feedback, and creating reusable assets and providing thought leadership for raising GitLab&#39;s technical bar globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead technical discovery, architecture design, demos, and end-to-end evaluations (POC/POV) to validate GitLab as the preferred agentic, AI-powered DevSecOps platform for prospects and customers.</li>
</ul>
<ul>
<li>Drive AI-focused solution strategy as the team&#39;s AI subject matter expert, including competitive positioning and business value justifications.</li>
</ul>
<ul>
<li>Own the technical strategy and influence Customer Success Plans for assigned accounts, acting as the &#39;technical CTO&#39; to guide multi-team, multi-year transformation initiatives across the DevSecOps lifecycle.</li>
</ul>
<ul>
<li>Collaborate with Sales, Customer Success, Product Management, Engineering, and Marketing to shape account strategies, inform territory planning, and ensure successful platform adoption.</li>
</ul>
<ul>
<li>Provide advanced technical guidance during the pre-sales cycle, including tender and audit support, workshop design, and solving complex integration and implementation challenges.</li>
</ul>
<ul>
<li>Serve as the voice of the customer by translating real-world feedback into product requirements, documentation improvements, and roadmap input, especially for AI, security, and platform capabilities.</li>
</ul>
<ul>
<li>Create and share reusable technical assets such as reference architectures, working examples, best practice guides, and internal enablement content to scale impact across regions.</li>
</ul>
<ul>
<li>Mentor other Solutions Architects, contribute to global initiatives for the Center of Excellence, and act as an external industry authority through thought leadership, standards participation, and ecosystem relationships.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Expert-level command of the most strategic aspects of GitLab&#39;s product and customer personas, while empowering the field with domain knowledge.</li>
</ul>
<ul>
<li>Deep hands-on expertise with AI, such as designing or implementing AI-powered solutions, advising on AI adoption, or acting as an AI subject matter expert for customers or internal teams.</li>
</ul>
<ul>
<li>Experience in technical pre-sales, software consulting, or similar roles where you connect complex technology to business outcomes.</li>
</ul>
<ul>
<li>Practical background in modern software development or operations, including CI/CD, DevSecOps practices, and related tooling.</li>
</ul>
<ul>
<li>Knowledge of cloud computing concepts and architectures, and how cloud services integrate into secure, scalable application delivery.</li>
</ul>
<ul>
<li>Ability to design and explain technical architectures that span multiple teams and phases of the software lifecycle, from planning through monitoring.</li>
</ul>
<ul>
<li>Skill in leading technical evaluations and workshops (for example, proofs of value or solution design sessions) with diverse stakeholders, from engineers to executives.</li>
</ul>
<ul>
<li>Strong communication, relationship-building, and stakeholder management skills, with the ability to act as a trusted advisor and customer advocate across sales, product, and engineering teams.</li>
</ul>
<ul>
<li>Openness to learning and growth, with experience building new skills over time; candidates with transferable experience in adjacent domains (for example security, data, or cloud architecture) are encouraged to apply.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>This role sits within GitLab&#39;s global Solutions Architecture Center of Excellence, our distributed team of subject matter experts focused on AI, application security, and monetization.</p>
<p>Our mission is to accelerate GitLab&#39;s market leadership by helping shape how customers adopt GitLab and partnering with Sales, Product, and Engineering to drive successful platform outcomes.</p>
<p>We collaborate asynchronously across regions, sharing best practices, reusable assets, and field insights that influence product direction and go-to-market motions.</p>
<p>As an AI-focused Solutions Architect on our team, you&#39;ll help tackle complex customer challenges around AI adoption, security, and value realization, while contributing to the technical standards, frameworks, and thought leadership that support GitLab&#39;s most strategic accounts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$138,600-$297,000 USD</Salaryrange>
      <Skills>AI, DevSecOps, Cloud Native, CI/CD, DevOps, Cloud Computing, Technical Architecture, Solution Design, Pre-Sales, Software Consulting, Machine Learning, Data Science, Security, Cloud Security, Containerization, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting their platform.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8341795002</Applyto>
      <Location>Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8092b6e-7f5</externalid>
      <Title>Bare Metal Support Engineer</Title>
      <Description><![CDATA[<p>As a Bare Metal Support Engineer at CoreWeave, you will be responsible for supporting, operating, and maintaining CoreWeave&#39;s extensive GPU fleet across our growing data centers in the U.S., Europe, and beyond.</p>
<p>You will work closely with customers, data center technicians, and engineering teams to ensure the reliability, performance, and scalability of our infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Providing high-level support for customers utilizing bare-metal GPU fleets on CoreWeave Cloud.</li>
<li>Diagnosing, triaging, and investigating reported customer issues and high-priority incidents, identifying root causes and escalating when necessary.</li>
<li>Developing a deep understanding of customer workloads and use cases to provide tailored technical support.</li>
<li>Coordinating remote troubleshooting and hardware interventions with Data Center Technicians.</li>
<li>Creating and maintaining internal documentation, including troubleshooting guides, best practices, and knowledge base articles.</li>
<li>Participating in an on-call rotation to support production clusters and ensure operational reliability.</li>
<li>Collaborating with engineering teams to improve hardware reliability, software stability, and system performance.</li>
<li>Implementing automation and scripting to streamline support workflows and reduce manual interventions.</li>
<li>Performing in-depth log analysis and debugging across multiple layers of the stack (firmware, drivers, hardware).</li>
<li>Providing feedback to internal teams on common support issues to drive continuous improvements.</li>
<li>Working with networking teams to troubleshoot connectivity issues affecting customer workloads.</li>
<li>Supporting supercomputing infrastructure running GPU workloads at scale.</li>
<li>Driving operational excellence by refining internal processes and support methodologies.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Experience in data centers, GPU clusters, server deployments, system administration, or hardware troubleshooting.</li>
<li>Demonstrated experience driving resolutions and continuous improvements across cross-functional environments and teams within a data center environment.</li>
<li>Intermediate knowledge of Linux (Ubuntu, CentOS, or similar), including command-line proficiency.</li>
<li>Experience with NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing (HPC), and large-scale data center environments.</li>
<li>Experience in networking fundamentals (TCP/IP, VLANs, DNS, DHCP) and troubleshooting tools.</li>
<li>Hands-on experience with firmware updates, BIOS configurations, and driver management.</li>
<li>Experience analyzing system logs and debugging issues across firmware, drivers, and hardware layers.</li>
<li>Experience working with Jira, Confluence, Notion, or other issue-tracking and documentation platforms.</li>
<li>Experience in scripting and automation (Python, Bash, Ansible, or similar).</li>
</ul>
<p>If you&#39;re a curious and analytical individual with a passion for problem-solving and a desire to work in a fast-paced environment, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$83,000 to $132,000</Salaryrange>
      <Skills>Linux, GPU clusters, server deployments, system administration, hardware troubleshooting, NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing, large-scale data center environments, networking fundamentals, troubleshooting tools, firmware updates, BIOS configurations, driver management, system logs, debugging issues, Jira, Confluence, Notion, issue-tracking, documentation platforms, scripting, automation, Kubernetes, Docker, containerized infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that delivers a platform of technology, tools, and teams to enable innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4560350006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce541b1a-167</externalid>
      <Title>Senior Technical Account Manager - Auth0</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p><strong>The Team</strong></p>
<p>Technical Account Management (TAM) is a global team that owns Auth0 customer success within Okta’s broader Customer Success team. We collaborate with Auth0’s customers to share knowledge and best practices, and we make recommendations to continuously innovate around identity and security.</p>
<p>As our customers’ strategic identity coaches, we are Auth0 product experts, and we enable Auth0&#39;s worldwide growth by educating existing customers and ensuring they are happy and successful.</p>
<p>We share our technical and product expertise with customers through presentations, demonstrations, technical evaluations, and ongoing recommendations on Auth0 and industry best practices.</p>
<p><strong>The Opportunity</strong></p>
<p>As a TAM specializing in enterprise identity, including the Auth0 product and adjacent technologies, you will provide Okta’s customers with strategic technical guidance across the comprehensive suite of products and features available at Okta.</p>
<p>TAMs are held in high regard as technical experts on how Okta’s solutions translate to business value, and for their ability to understand the code that makes up identity authentication pipelines; Auth0, after all, is developer-friendly.</p>
<p>The TAM specialization calls for an understanding of hybrid scenarios that capitalize on Auth0’s authentication, authorization, and lifecycle management capabilities for consumer SaaS, business-to-consumer (B2C), and general CIAM applications.</p>
<p>As an Auth0 TAM, you will guide some of the world&#39;s largest companies on their strategic identity journeys while serving as an Auth0 champion!</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Fully own the account management function as an Auth0 TAM, including both the business and the technical side</li>
<li>Advise customers on best practices and product adoption in a post-sales capacity</li>
<li>Be comfortable engaging a range of personas, including but not limited to CISOs, Product Owners, CMOs, and developers, across a portfolio of strategic accounts</li>
<li>Have a deep interest in the security space and where the industry is headed, particularly from a CIAM perspective</li>
<li>Earn customer trust by understanding their goals and use cases, and recommend best practices relating to process changes, product adoption, configuration, and additional features to meet requirements</li>
<li>Maintain focus on increasing subscription adoption, customer satisfaction, and retention</li>
<li>Review customer architectures and Auth0 configurations to ensure they are enhancing security posture and capturing ROI as Auth0 releases new features and functionality</li>
<li>Establish strong personal relationships on key accounts with decision-makers and stakeholders</li>
<li>Establish strong relationships internally, too, as part of a larger collaborative team</li>
<li>Participate in content creation for both internal and external enablement of staff and customers</li>
</ul>
<p><strong>What you’ll bring to the role</strong></p>
<ul>
<li>7+ years of total experience in information technology, with at least 3 years of hands-on experience as a Technical Account Manager (TAM) or in a comparable practitioner role in the IAM space</li>
<li>Working proficiency in the following core IAM areas:
<ul>
<li>Technologies and protocols to support identity federation and robust access control models, including concepts such as SAML 2.0, WS-Federation, OAuth, OpenID Connect, etc.</li>
<li>Legacy applications in a hybrid IT environment with non-standard applications (i.e., those that do not support modern identity federation protocols)</li>
<li>Enterprise applications in the ecosystem that provide identity and attributes to applications, or that harness an external application to help drive business processes (ITSM, HR, etc.)</li>
<li>Consumer and/or SaaS application deployments</li>
<li>Security and performance monitoring, and third-party signals integrations (SIEM, MDM, WAF, etc.)</li>
</ul>
</li>
<li>Familiarity with IAM solution providers is strongly desired</li>
<li>Strong background in any of the following: Technical Account Manager, Technical Consultant, Product Manager, Solution Architect, or a similar role</li>
<li>Understanding of common software development practices, including concepts such as SDLC, CI/CD, containerization, etc.</li>
<li>Ability to code in JavaScript</li>
<li>Understanding of identity and surrounding technologies, including concepts such as encryption, PKI, RSA, etc.</li>
<li>Strong business acumen and a history of success owning enterprise-segment customer relationships and escalations</li>
<li>Excellent communication skills, with the ability to set expectations and communicate goals and objectives with customers at various levels, from a developer to a CISO</li>
<li>Ability to track and influence customer behavior and health metrics across a portfolio of accounts</li>
<li>This position will be located in London or Barcelona and will require some travel (under 50% of the time)</li>
<li>BA/BS/MS in a related discipline or equivalent work experience required</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£104,000-£143,000 GBP</Salaryrange>
      <Skills>SAML 2.0, WS-Federation, OAuth, OpenID Connect, Legacy applications, Enterprise applications, Consumer and/or SaaS application deployments, Security and performance monitoring, 3rd party signals integrations, IAM solution providers, Technical Account Management, Technical Consulting, Product Management, Solution Architect, SDLC, CI/CD, Containerization, Javascript, Encryption, PKI, RSA, Business acumen, Communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7614965</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d34bbf18-2b2</externalid>
      <Title>Senior Site Reliability Engineer (FinOps) - Platform</Title>
<Description><![CDATA[<p>As a Senior Site Reliability Engineer (FinOps) - Platform, you will be part of the Platform Engineering department, responsible for designing, building, scaling, and maturing the multi-cloud platform that hosts internal and external services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Taking an engineering approach to leading technical initiatives that automate system engineering efforts and guarantee the reliability of the global Elastic infrastructure.</li>
<li>Growing our global Platform infrastructure to meet increasing scaling demands by developing and maintaining software, tooling, and automation.</li>
<li>Taking an inclusive approach to championing an environment focused on collaboration, operational excellence, and uplifting others.</li>
<li>Responding to major incidents and preventing repeated customer impact through prioritized problem management.</li>
</ul>
<p>The ideal candidate will have experience and lessons learned from striving for &#39;progress, not perfection&#39; in the name of Platform reliability. They will have a background in software engineering, enabling them to collaborate with engineers to expertly identify, implement, and deliver solutions. Experience with public cloud and managed Kubernetes services is advantageous.</p>
<p>The role requires a passion for developing solutions and for inclusive communication methods that grow and strengthen partner and team relationships. Experience working in distributed teams or working remotely is desirable.</p>
<p>Bonus points for experience operating a SaaS product in a public cloud, building or operating Kubernetes-at-scale infrastructure, writing non-trivial programs in Golang or other programming languages, working with containerized services, leading and improving alerting and major incident management processes and metrics, and system administration with professional Linux skills on distributed systems at scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud computing, Kubernetes, Golang, Containerization, Linux, System administration, Alerting and incident management, Infrastructure-as-Code, Terraform, Crossplane, Distributed systems, Self-organizing teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic develops a search engine and analytics platform used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7565188</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4ec68a8-fb9</externalid>
      <Title>Manager, Enterprise Security Engineering</Title>
<Description><![CDATA[<p>We&#39;re seeking a security-focused leader to build and scale world-class defensive controls protecting the infrastructure that supports our defense technology products.</p>
<p>As a Manager, Enterprise Security Engineering, you will lead a high-performing team of security engineers, set technical direction, and establish clear standards for engineering excellence and ownership. You will define and execute the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A, and design and implement security controls across cloud, production, and corporate infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, mentoring, and growing a high-performing team of security engineers</li>
<li>Setting technical direction and establishing clear standards for engineering excellence and ownership</li>
<li>Partnering in hiring, performance management, and career development</li>
<li>Defining and executing the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A</li>
<li>Designing and implementing security controls across cloud, production, and corporate infrastructure</li>
<li>Developing tools and systems to improve security posture and operational efficiency</li>
<li>Conducting security architecture and design reviews for systems and applications</li>
<li>Partnering across infrastructure, IT, product, and security teams to reduce risk while enabling velocity</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Ability to work autonomously, take ownership of projects, and collaborate across teams</li>
<li>Demonstrated ability to translate ambiguous requirements into clear technical roadmaps and delivered outcomes</li>
<li>Have participated in or supported incident response events</li>
<li>Strong programming ability in one or more general-purpose languages (Python, Go, Rust, etc)</li>
<li>Experience with one or more infrastructure as code languages (e.g., Terraform, AWS CDK) in a production capacity</li>
<li>Experience conducting security architecture or design reviews around custom business applications</li>
<li>Strong understanding of modern attack vectors and defensive mitigation strategies</li>
<li>Experience working with cloud platforms and deploying applications through CI/CD pipelines</li>
<li>Experience implementing security controls across endpoints, corporate cloud environments, and internal infrastructure</li>
<li>Eligible to obtain and maintain a U.S. TS clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience building bespoke solutions in high-growth and high-complexity environments</li>
<li>Experience with AWS, Azure, or GCP security ecosystem and tooling</li>
<li>Strong experience with Linux operating systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>security engineering, infrastructure as code, cloud security, endpoint security, M&amp;A security, incident response, security architecture, CI/CD pipelines, Linux operating systems, AWS security ecosystem, Azure security ecosystem, GCP security ecosystem, containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril is a defense technology company that develops and manufactures advanced sensors and systems for military and commercial applications.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5070618007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a585fcb5-07b</externalid>
      <Title>Senior Security Engineer, Enterprise Security</Title>
<Description><![CDATA[<p>As a Senior Security Engineer, Enterprise Security, you will design and ship the security controls that underpin CoreWeave&#39;s workforce and enterprise stack. You will lead initiatives across identity, access management, device and endpoint security, and SaaS security, partnering closely with IT Engineering, Endpoint, Network, and other security teams.</p>
<p>Your day-to-day will blend hands-on engineering (writing code, building integrations, tuning controls) with architecture and program ownership (setting standards, defining patterns, and driving adoption across teams). You will be responsible for turning high-level objectives, such as “implement zero trust for workforce access” or “deploy phishing-resistant MFA at scale”, into concrete designs, automation, and measurable risk reduction.</p>
<p>In this role, you will:</p>
<ul>
<li>Engineer modern identity and access controls</li>
<li>Design, implement, and operate workforce identity solutions (e.g., Okta/Entra and other IdPs) including SSO, MFA, conditional access, and lifecycle automation via SCIM.</li>
<li>Develop and roll out phishing-resistant MFA for high-value accounts and critical access paths (e.g., FIDO2/WebAuthn, hardware keys, device-bound authenticators).</li>
<li>Define and maintain RBAC/IAM patterns for enterprise applications (role models, groups, entitlements, JIT access, and approvals).</li>
</ul>
<ul>
<li>Implement zero trust for workforce and enterprise access</li>
<li>Design and deploy controls that combine user identity, device posture, network context, and application sensitivity to enforce least-privilege access.</li>
<li>Partner with Network and Infrastructure teams to integrate mTLS, service identity, and policy-based access into internal services and admin interfaces.</li>
<li>Help transition from legacy perimeter models to zero trust network access (ZTNA) patterns for employees, contractors, and third parties.</li>
</ul>
<ul>
<li>Secure SaaS and collaboration platforms</li>
<li>Evaluate, onboard, and harden SaaS applications (Google Workspace, Microsoft 365, Slack, HRIS, ticketing, and other business apps) to align with enterprise security policies.</li>
<li>Implement and tune controls such as SCIM provisioning, data access policies, DLP, sharing controls, and audit logging across the SaaS estate.</li>
<li>Partner with business and IT owners to ensure new SaaS applications meet baseline security standards before adoption.</li>
</ul>
<ul>
<li>Harden endpoints and the extended workforce</li>
<li>Collaborate with Endpoint/IT teams to define and enforce baseline configurations for laptops, workstations, and other managed devices via MDM and EDR.</li>
<li>Design secure patterns for contractor and vendor access, including device requirements, identity separation, and time-bound access.</li>
<li>Support investigations and incident response related to identity, endpoint, and SaaS domains.</li>
</ul>
<ul>
<li>Automate and instrument everything you can</li>
<li>Build automation and self-service experiences for access requests, approvals, access reviews, and break-glass workflows.</li>
<li>Develop integrations between IdPs, HRIS, ticketing, and other systems to minimize manual toil and reduce identity-related error rates.</li>
<li>Define and instrument metrics for enterprise security (e.g., MFA coverage, zero trust policy enforcement, joiner/mover/leaver SLA adherence, SaaS posture).</li>
</ul>
<ul>
<li>Partner on detection, response, and governance</li>
<li>Work with Security Operations and SIEM teams to ensure robust visibility into identity, device, and SaaS activity, and to build high-signal detections.</li>
<li>Contribute to policies, standards, and reference architectures that encode enterprise security expectations.</li>
<li>Author clear documentation and runbooks that make it easy for teams to consume and operate the controls you build.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Identity and Access Management, Security Engineering, Zero Trust Architecture, Phishing-Resistant MFA, RBAC/IAM Patterns, SCIM Provisioning, Data Access Policies, DLP, Sharing Controls, Audit Logging, Endpoint Security, MDM, EDR, Automation, Self-Service Experiences, Integrations, Metrics, Enterprise Security, Security Operations, SIEM, Policies, Standards, Reference Architectures, Cloud Computing, AI Applications, Containerization, Kubernetes, DevOps, CI/CD Pipelines, Agile Methodologies, Scrum, Kanban, Project Management, Leadership, Communication, Collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653764006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e772a5e2-9a4</externalid>
      <Title>Lead Software Engineer, API/SDK</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our rapidly growing team in Seattle, WA. In this role, you will work on our developer portal and generated SDKs to enable our partners to write complex technical integrations for the Lattice platform.</p>
<p>This position requires deep technical expertise in API design, cloud architecture, and hands-on development experience. If you thrive on solving complex technical challenges, enjoy creating great developer ecosystems, and are passionate about creating mission-critical solutions at scale, then this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work on our developer portal to enhance partner engagement and streamline the integration process</li>
<li>Develop infrastructure to simplify the exposure of APIs and SDKs for external developers</li>
<li>Build and maintain sample applications, SDKs, and technical frameworks that enable partners to implement sophisticated solutions</li>
<li>Provide technical leadership during partner onboarding, guiding their engineering teams through complex integration scenarios</li>
<li>Create proof-of-concept applications and reference architectures that demonstrate advanced Lattice capabilities and integration patterns</li>
<li>Collaborate with engineering teams to influence the platform roadmap based on real-world implementation challenges</li>
<li>Conduct technical reviews of partner architectures and provide recommendations for optimization and scalability</li>
<li>Troubleshoot complex integration issues and provide hands-on technical support for mission-critical deployments</li>
<li>Evangelize best practices for building resilient, secure, and performant applications on the Lattice platform</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience as a Senior Software Engineer with customer-facing responsibilities</li>
<li>Strong programming experience in multiple languages (Python, Java, Go, C++, or similar) with demonstrated ability to build production-grade applications</li>
<li>Deep expertise in distributed systems architecture, including microservices, event-driven architectures, and API gateway patterns</li>
<li>Experience with CI/CD pipelines, infrastructure as code, and DevOps practices</li>
<li>Hands-on experience with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)</li>
<li>Proven track record of designing and implementing complex system integrations in enterprise environments</li>
<li>Experience with API technologies including REST, gRPC, GraphQL, and real-time communication protocols (WebSockets, message queues)</li>
<li>Strong understanding of security patterns, authentication/authorization frameworks, and data protection in distributed systems</li>
<li>Excellent technical communication skills with the ability to present complex architectural concepts to both technical and non-technical stakeholders</li>
<li>Must be a U.S. Person due to required access to U.S. export-controlled information or facilities</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience architecting solutions for defense, aerospace, or other mission-critical industries</li>
<li>Background in edge computing, IoT architectures, or real-time data processing systems</li>
<li>Knowledge of air-gapped environments, offline-first architectures, and high-availability system design</li>
<li>Open source contributions to architectural frameworks or developer tools</li>
<li>Experience mentoring engineering teams and leading technical design reviews</li>
<li>Advanced degree in Computer Science, Engineering, or related technical field</li>
</ul>
<p>Salary Range: $191,000-$253,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>API design, cloud architecture, hands-on development experience, distributed systems architecture, CI/CD pipelines, infrastructure as code, DevOps practices, cloud platforms, containerization technologies, complex system integrations, API technologies, security patterns, authentication/authorization frameworks, data protection, edge computing, IoT architectures, real-time data processing systems, air-gapped environments, offline-first architectures, high-availability system design, open source contributions, mentoring engineering teams, leading technical design reviews</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4754841007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f838587f-1ee</externalid>
      <Title>Software Engineer, Kubernetes</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our team and help us build and scale our Kubernetes environment. As a Software Engineer, you will play a key part in ensuring the availability, reliability, and scalability of our cloud infrastructure. You will drive operational excellence, implement robust automation, and help shape the systems that keep our cloud running smoothly.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build, operate, and scale Kubernetes-based production infrastructure that delivers our products with high reliability and performance.</li>
<li>Develop automation, tooling, and infrastructure as code in Go and other infrastructure-focused languages to enable zero-touch operations, rapid recovery, and seamless deployments.</li>
<li>Design, implement, and maintain monitoring, alerting, and observability solutions, leveraging the Grafana ecosystem and related tools, to proactively identify and resolve production issues.</li>
<li>Drive incident response efforts, participate in on-call rotations, and lead root cause analysis to prevent recurrence and improve incident handling processes.</li>
<li>Partner with internal and cross-functional teams to ensure platform capabilities meet rigorous operational requirements and customer SLAs.</li>
<li>Engineer for resiliency, implementing best practices for redundancy, fault tolerance, and disaster recovery across complex distributed systems.</li>
<li>Advocate for security, reliability, and performance improvements throughout the stack, continuously seeking opportunities to strengthen operational standards.</li>
<li>Contribute to the development of custom Kubernetes operators and intelligent orchestration frameworks that optimize AI workload performance and resource utilization at scale.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in production engineering, SRE, or large-scale infrastructure/platform roles.</li>
<li>Knowledgeable in Kubernetes administration, container orchestration, and microservices architectures, with a bias for automating every aspect of operations.</li>
<li>Proven track record managing high-uptime, customer-facing systems in a fast-moving environment, with experience delivering measurable improvements in reliability and performance.</li>
<li>Experience in monitoring, observability, and incident management using tools like Prometheus, Grafana, Datadog, Splunk, Loki, or VictoriaMetrics.</li>
<li>Deep understanding of Linux systems and infrastructure-focused programming, especially in Go and Bash.</li>
<li>Strong analytical skills and ability to troubleshoot complex production issues.</li>
<li>Excellent communication skills and ability to share knowledge with technical and non-technical stakeholders.</li>
</ul>
<p>What Success Looks Like:</p>
<ul>
<li>Deliver stable, robust, and highly available systems that consistently meet or exceed uptime and performance targets.</li>
<li>Champion initiatives that drive automation, reduce operational toil, and increase the efficiency of incident response.</li>
<li>Actively contribute to a blameless culture of learning, mentoring others in operational best practices and production engineering principles.</li>
<li>Help CoreWeave maintain industry leadership through flawless execution in supporting demanding, AI-powered workloads at scale.</li>
</ul>
<p>Why CoreWeave?</p>
<ul>
<li>We work hard, have fun, and move fast!</li>
<li>We&#39;re in an exciting stage of hyper-growth that you won&#39;t want to miss out on.</li>
<li>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</li>
<li>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</li>
</ul>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $120,000 to $176,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer:</p>
<ul>
<li>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</li>
<li>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</li>
</ul>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace:</p>
<ul>
<li>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,000 to $176,000</Salaryrange>
      <Skills>Kubernetes administration, container orchestration, microservices architectures, Go, Bash, Linux systems, monitoring, observability, incident management, Prometheus, Grafana, Datadog, Splunk, Loki, VictoriaMetrics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577764006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d2be5ba1-459</externalid>
      <Title>Senior Software Engineer, Box-Office</Title>
<Description><![CDATA[<p>We are seeking a Senior Full-Stack Engineer to join the Box-Office Team to help us improve the reliability, efficiency, scalability, and user experience of our automation. As a member of the Box-Office Team, you will have the opportunity to develop applications, user experiences, reporting automations, API endpoints, and other fleet management tooling and UIs.</p>
<p>Responsibilities:</p>
<ul>
<li>Assess and improve existing user interfaces, adding new features and simplifying the user experience.</li>
<li>Resolve integration challenges between our infrastructure and our vendors&#39; APIs, focusing on making the interactions easy to understand and robust.</li>
<li>Design and implement solutions to fascinating problems at scale for multi-site deployment and management of CoreWeave&#39;s global hardware fleet, including interfaces for managing the deployment and maintenance of that fleet.</li>
<li>Create test plans, deployment automation, dashboards, alerts, and insights into our fleet operations, and participate in the Fleet Provisioning Automation on-call rotation.</li>
<li>Grow, change, invest in your teammates, be invested in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>container-based microservices, open-source tools, automation, UI development, React, Typescript, Golang</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4610724006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>248927c8-76d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are looking for software engineers to join our Platform organisation. We build the foundational primitives that accelerate product development across Anthropic, and own infrastructure and systems that teams depend on to ship reliably and at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and optimise the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines.</li>
<li>Partner closely with product teams to understand their development workflow and eliminate friction points.</li>
<li>Work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don&#39;t.</li>
</ul>
<p><strong>Platform Acceleration</strong></p>
<p>We work on maximising the developer productivity of product engineers at Anthropic. You&#39;ll help define the performance quality standard for the company, power the next generation of LLM-first products, and redefine best-in-class developer experience.</p>
<p><strong>Service Infra</strong></p>
<p>We build and maintain the core infrastructure that powers Anthropic&#39;s engineering organisation, from service mesh and observability systems to deployment pipelines and shared libraries.</p>
<p><strong>Multicloud</strong></p>
<p>We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers. We focus on cloud-agnostic tooling, cross-cloud networking, and multi-region deployments.</p>
<p><strong>Auth &amp; Identity</strong></p>
<p>We build and maintain the critical infrastructure that powers identity and authentication across Anthropic&#39;s product suite. We work closely with product teams, security, support, and trust &amp; safety as customers.</p>
<p><strong>Connectivity</strong></p>
<p>Our mission is to make Claude the most connected AI. We own the MCP proxy that routes every tool call and the OAuth and token management that keeps connections authenticated.</p>
<p><strong>API Distributability</strong></p>
<p>The Claude API today is a rapidly growing platform serving developers and enterprises at scale, but reaching the next tier of enterprise customers requires transforming how and where we deploy it.</p>
<p><strong>Platform Intelligence</strong></p>
<p>We build the training systems that adapt Claude to specific customer workloads. The core problem is task-specific adaptation: getting the right intelligence, cost, and latency profile for a particular use case, and building toward systems where that adaptation can deepen as the customer&#39;s usage grows.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>A minimum of 5 years of practical experience building backend product or platform systems, distributed systems, cloud-native products, developer tools, or external developer-facing products.</li>
<li>Strong fundamentals in service-oriented architectures, networking, and systems design.</li>
<li>Proficiency in Python, Go, Rust, or similar systems languages.</li>
<li>Experience with cloud infrastructure (GCP, AWS, or Azure), container orchestration (Kubernetes), and/or multi-cloud networking.</li>
<li>Take full ownership of your work, from design through deployment and operations.</li>
<li>Can navigate ambiguity and make sound technical decisions independently.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual compensation range: $320,000-$320,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.</li>
<li>Visa sponsorship: We do sponsor visas!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$320,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, cloud infrastructure, container orchestration, multi-cloud networking, service-oriented architectures, networking, systems design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157844008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70a6eadc-7c1</externalid>
      <Title>Security Programs - Technical Program Manager</Title>
      <Description><![CDATA[<p>We are seeking a Security Technical Program Manager to join our Product Engineering organization. As a Security Technical Program Manager, you will work across cross-functional teams to ensure our cloud infrastructure is secure and private, while maintaining scalability and delivery of exceptional performance to meet the demands of our customers.</p>
<p>The ideal candidate will have 8+ years of hands-on experience in Security Technical Program Management, Security Strategy, Security Risk Management and/or Security Compliance roles, ideally within the cloud services industry. They will have a Bachelor&#39;s degree in Information Security, Computer Science, or a related field or equivalent job experience.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead end-to-end program management for critical security engineering and security compliance initiatives, including cross-functional planning, execution, delivery, and retrospectives</li>
<li>Define program scope, milestones, and success metrics while managing security risks and dependencies</li>
<li>Partner closely within the security team, and across engineering, product management and operations teams to ensure alignment on priorities and deliverables</li>
<li>Act as the primary point of contact for security and cross-functional stakeholders, providing regular status updates, addressing risks, and ensuring accountability</li>
<li>Facilitate and influence technical security, privacy and compliance discussions and decisions to align with long-term infrastructure goals and business objectives</li>
<li>Develop and implement scalable processes to improve efficiency and predictability in program delivery</li>
<li>Strategically automate and improve day-to-day operations, processes and reporting</li>
<li>Tailor communications to a diverse audience and remain adaptable to a wide range of personalities and technical depth</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>Competitive salary range of $122,000 to $237,000</li>
<li>Discretionary bonus, equity awards, and a comprehensive benefits program</li>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace:</p>
<ul>
<li>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,000 to $237,000</Salaryrange>
      <Skills>Security Technical Program Management, Security Strategy, Security Risk Management, Security Compliance, Cloud Services, Program Management, Cross-Functional Team Collaboration, Communication, Adaptability, Technical Security, Privacy, Compliance, Networking, Storage, Containerization (Kubernetes), CI/CD Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4556342006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a7635f5-a02</externalid>
      <Title>Principal Software Engineer (Networking) - Platform</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer (Networking) - Platform, you will be part of the Platform Engineering department, responsible for crafting, building, and improving the multi-cloud platform at scale for Elastic Cloud Hosted and Serverless. You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Taking an engineering approach in leading technical initiatives for automating network engineering efforts to guarantee the reliability of the global Elastic infrastructure.</li>
<li>Growing our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling, and automations.</li>
<li>Collaborating with an inclusive approach and focusing on operational excellence that uplifts others.</li>
<li>Preventing repeated customer impact through major-incident response and prioritized problem management.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years in Software Engineering with product success in delivering Cloud network solutions.</li>
<li>Experience in public cloud, Go, and managed Kubernetes services is advantageous.</li>
<li>Successes and lessons learned from striving for &#39;progress not perfection&#39; in the name of Platform reliability.</li>
<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>
</ul>
<p>Bonus points include:</p>
<ul>
<li>Designing and building a SaaS product in a public cloud ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>
<li>Building Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>
<li>Writing product features or functions in Golang or other programming languages.</li>
<li>Working with containerized services (such as Docker).</li>
<li>Proven results in leading and improving cross-team engineering initiatives.</li>
<li>Experience in system administration with professional skills in Linux on distributed systems at scale.</li>
<li>Diagnosing issues and designing, implementing, and creating solutions with the Elastic Stack.</li>
<li>Experience self-organizing and sharing in a globally distributed team environment.</li>
<li>Strengthening team members and bringing out the best in each other through coaching and mentoring.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Infrastructure-as-Code, Crossplane, Terraform, Golang, Containerized Services, Docker, System Administration, Linux, Distributed Systems, Kubernetes, Automation, Inclusive Communication, Coaching and Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Its search AI platform is used by over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7713597</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9a2bbb70-2c0</externalid>
      <Title>Senior Software Engineer - Data Platform</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our team in Bengaluru, India. As a Senior Software Engineer at Databricks, you will be responsible for designing, developing, and deploying large-scale distributed systems, including backend, DDS, and full-stack engineering. You will work closely with our product management team to bring great user experiences to our customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop reliable and high-performance services and client libraries for storing and accessing large amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>
<li>Build scalable services using Scala, Kubernetes, and data pipelines, such as Apache Spark and Databricks.</li>
<li>Work on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Collaborate with our DDS team to develop and deploy data-centric solutions using Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Develop and maintain high-quality code, following best practices and coding standards.</li>
<li>Participate in code reviews and provide feedback to improve the quality of the codebase.</li>
<li>Troubleshoot and resolve issues that arise during deployment and operation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or a similar language.</li>
<li>Experience developing large-scale distributed systems from scratch.</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Strong understanding of software design patterns and principles.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Ability to work effectively in a team environment.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Knowledge of cloud-based storage systems, such as AWS S3 and Azure Blob Store.</li>
<li>Familiarity with containerization using Docker and Kubernetes.</li>
<li>Experience with continuous integration and continuous deployment (CI/CD) pipelines.</li>
<li>Strong understanding of security principles and practices.</li>
<li>Familiarity with agile development methodologies and version control systems, such as Git.</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>Databricks is an equal opportunities employer and welcomes applications from diverse candidates. We are committed to creating an inclusive and respectful work environment where everyone feels valued and empowered to contribute their best work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Apache Spark, Data Plane Storage, Delta Lake, Delta Pipelines, Kubernetes, Docker, Git, Agile development methodologies, Version control systems, Cloud-based storage systems, Containerization, Continuous integration and continuous deployment (CI/CD) pipelines, Security principles and practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7601580002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eef55d3d-bf0</externalid>
      <Title>Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Job Title: Cloud Deployment Engineer, Space</p>
<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>ABOUT THE JOB</strong></p>
<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>
<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>
<p><strong>REQUIRED QUALIFICATIONS</strong></p>
<ul>
<li>5+ years of working experience in DevOps or SRE type roles</li>
<li>Strongly proficient in utilizing cloud services like AWS, Azure, or Google Cloud Platform</li>
<li>Experience with IaC tooling (Terraform, CloudFormation, Puppet, Ansible, etc.)</li>
<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>
<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>
<li>Programming ability in one or more of the general scripting languages (Python, Go, Bash, Rust, etc)</li>
<li>Strong problem-solving skills and the ability to work well under pressure</li>
<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>
<li>Experience deploying complex and scalable infrastructure solutions</li>
<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
<p><strong>PREFERRED QUALIFICATIONS</strong></p>
<ul>
<li>Extensive expertise in Kubernetes and Helm</li>
<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>
<li>Cisco Certified Network Associate (CCNA)</li>
<li>Experience with government Cyber certification processes</li>
<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>
<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>
<li>Military service background (particularly with Space experience)</li>
</ul>
<p>US Salary Range $129,000-$171,000 USD</p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers; and are considered part of Anduril&#39;s total compensation package.</p>
<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<ul>
<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>
<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>
<li>Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development.</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>129000</Compensationmin>
      <Compensationmax>171000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), MySQL, Federal</Title>
      <Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will own all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team, working on scaling, securing, and tuning our infrastructure, be it self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24x7 availability of the production database tier; design, implement, and document operational processes, tasks, and configuration management; and coordinate efforts toward performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schemas, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p>Required Skills:</p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficiency in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire. Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>Proven experience as a MySQL DBRE, In-depth knowledge of MySQL internals, performance tuning, and query optimization, Experience in database design, implementation, and maintenance in a high-availability environment, Strong proficiency in SQL and familiarity with scripting, Familiarity with database monitoring tools (e.g, Grafana), Solid understanding of database security practices and compliance requirements, Ability to troubleshoot and resolve database performance issues and outages promptly, Excellent communication skills and ability to work effectively in a team environment, Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience), AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management, Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management, Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability, Proficient in a Linux environment, including Linux internals and tuning, Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>162000</Compensationmin>
      <Compensationmax>244000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0c258f-1f6</externalid>
      <Title>Engineering Manager II, Enterprise AI Solutions</Title>
      <Description><![CDATA[<p>We are seeking a business-savvy Engineering Manager to help define Corporate IT&#39;s AI-based future at Pinterest. Working closely with cross-functional engineering teams and business leaders, you will lead a nimble team playing a pivotal role in scaling Corporate IT&#39;s engineering department.</p>
<p>As an Engineering Manager, you will guide your team in designing and building the solutions that make our business partners&#39; jobs easier, faster, and more capable. You will grow and empower engineers while shaping how we build Pinterest&#39;s AI future.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead a team of employees and contractors focused on solving business problems using AI tools.</li>
<li>Work closely with the existing software engineering teams to develop a seamless and low-friction client experience.</li>
<li>Mentor junior engineers to help them grow and reach their full potential.</li>
<li>Motivate and lead your team to show up every day and do their best work.</li>
<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience leading and growing engineering teams, with a strong hands-on background in Python.</li>
<li>7+ years of industry experience designing, building, and operating scalable, highly available backend systems, including owning production-grade infrastructure at scale.</li>
<li>Proficiency in designing and delivering AI-based solutions that solve real-world business problems.</li>
<li>Understanding of business unit challenges and problems, focused on Finance, Accounting, Legal, Sales, and Marketing.</li>
<li>Experience with cloud infrastructure on AWS and containerized services using Docker and Kubernetes.</li>
<li>Demonstrated technical leadership and people management experience, including setting team vision and long-term roadmap, mentoring and growing engineers across all levels, driving day-to-day execution and engineering alignment, and partnering cross-functionally to deliver complex, high-impact platform investments.</li>
<li>Demonstrated ability to use AI to accelerate team execution, system design, and decision-making, paired with sound judgment in validating outputs, maintaining quality, and taking ownership of final outcomes.</li>
<li>Experience building storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</li>
</ul>
<p>In-Office Requirement Statement:</p>
<ul>
<li>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</li>
<li>This role will need to be in the office for in-person collaboration 1-2 times/quarter, and therefore can be situated anywhere in the country.</li>
</ul>
<p>Relocation Statement:</p>
<ul>
<li>This position is not eligible for relocation assistance.</li>
</ul>
<p>At Pinterest, we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Python, AI, Cloud infrastructure, Containerized services, Docker, Kubernetes, Data lake storage, Metadata management, Big data, ML/AI innovations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>177185</Compensationmin>
      <Compensationmax>364795</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494960</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c73d22c6-873</externalid>
      <Title>Senior Software Engineer (Golang, K8s &amp; CI Build Services)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI.</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>What You&#39;ll Own:</strong></p>
<ul>
<li>Unified Build Architectures: Design and implement modular, reusable build stages that define how all code at Okta is tested, secured, and packaged.</li>
<li>Systems Innovation: Solve deep scaling bottlenecks (e.g., Monorepo segmentation, dependency resolution) to accelerate thousands of developers.</li>
<li>Infrastructure as Code: Own the delivery of highly available build agents and artifact registries using Golang, Terraform, and AWS.</li>
<li>Engineering Excellence: Champion &#39;Build-it-once&#39; philosophies, creating self-healing systems that reduce operational toil and eliminate reactive support.</li>
</ul>
<p><strong>What We Are Looking For:</strong></p>
<ul>
<li>Experience: 6+ years in Platform or Infrastructure Engineering, specifically building large-scale CI/build platforms.</li>
<li>Expertise: Advanced proficiency in Golang for tooling and Terraform for infrastructure orchestration.</li>
<li>Containerization: Mastery of Kubernetes (K8s) and container primitives for build execution.</li>
<li>Scale Mindset: A proven track record of investigating distributed system failures and delivering performant solutions at scale.</li>
<li>Ownership: You don&#39;t just write code; you own the reliability, cost-efficiency, and security guardrails of the entire ecosystem.</li>
</ul>
<p><strong>The Okta Experience</strong></p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate.</p>
<p>Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>
<p>We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>
<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>
<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process.</p>
<p>In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>
<p>Okta is committed to complying with applicable data privacy and security laws and regulations.</p>
<p>For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Terraform, Kubernetes, Container primitives, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7810108</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>