<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1975b168-0c1</externalid>
      <Title>DevOps Engineer - EA Sports Technology</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a DevOps Engineer on the EA Sports Technology team, you will design, deploy, and operate solutions in a public cloud environment that integrates with corporate backend infrastructure and services.</p>
<p>This role is ideal for someone with a strong passion for technology and deep experience designing, deploying, and operating solutions in a public cloud environment. You will partner with the Operations Development Director and Technical Director to estimate, plan, and implement projects while delivering exceptional results.</p>
<p>Responsibilities:</p>
<ul>
<li>Design &amp; Architecture: Contribute to the design of secure, scalable, and supportable technical solutions and deployment models that adhere to DevOps security standards while minimizing team toil.</li>
<li>Engineering Leadership: Participate in sprint planning, conduct regular one-on-one meetings, contribute to engineer skill development reviews and goal setting, and mentor engineers in technical design and advanced troubleshooting methodologies.</li>
<li>Review &amp; Approval: Oversee the review and approval of internal and partner technical plans, technical briefs, and work plans.</li>
<li>Operational Support: Support always-on gaming server infrastructure and provide on-call support for feature launches, live events, and emergency escalations.</li>
<li>Troubleshooting: Resolve complex deployment, connectivity, performance, and availability issues within production and live services environments.</li>
<li>Automation &amp; Innovation: Lead the development of productivity-enhancing tools and evaluate new technologies to improve team efficiency and platform capabilities.</li>
<li>Documentation: Maintain comprehensive documentation of system configurations, procedures, and runbooks for live support.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Google Cloud Focus: Six or more years of relevant experience with public cloud services, with a primary emphasis on Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE). Experience with AWS is also acceptable.</li>
<li>Critical System Design: Proven track record in the design, implementation, and operational support of critical, always-on cloud systems.</li>
<li>Containerization &amp; DevOps: Expertise in Kubernetes (GKE), Helm, and Docker to manage and scale containerized workloads.</li>
<li>Automation &amp; Programming: Experience developing automated solutions using Python, Ruby, or Go, as well as proficiency with configuration management tools such as Chef and Ansible.</li>
<li>Observability &amp; CI/CD: Practical knowledge of monitoring systems such as Prometheus and Grafana, and pipeline automation platforms including Jenkins, GitLab, and GitHub.</li>
<li>Cloud Security &amp; Networking: Demonstrated expertise in cloud security best practices, Linux server security, and cloud networking, including load balancers, DNS, subnetting, route tables, NAT, and firewalls.</li>
<li>Cloud Database Management: Working knowledge of database concepts, including RDBMS and NoSQL, with specific experience managing and optimizing Database-as-a-Service offerings.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,300 - $170,700 CAD</Salaryrange>
      <Skills>Google Cloud Platform, Google Kubernetes Engine, Kubernetes, Helm, Docker, Python, Ruby, Go, Chef, Ansible, Prometheus, Grafana, Jenkins, GitLab, GitHub, Cloud security, Linux server security, Cloud networking, Load balancers, DNS, Subnetting, Route tables, NAT, Firewalls, Database concepts, RDBMS, NoSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher headquartered in Redwood City, California. It has a diverse portfolio of games and experiences, with a global presence and a wide range of job opportunities.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/DevOps-Engineer/212695</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>977d4185-42b</externalid>
      <Title>Technical Support Engineer - Use Cases</Title>
      <Description><![CDATA[<p>We are seeking a Technical Support Engineer - Use Cases to join our Support team in France. This role is ideal for someone who excels at technical troubleshooting, incident investigation, and customer communication in a B2B environment.</p>
<p>As a key member of the support team, you will be responsible for handling escalated technical issues from our enterprise clients, reproducing complex problems, and collaborating with engineering, data, and product teams to ensure swift resolution.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Technical Support &amp; Incident Management: Handle escalated tickets from enterprise clients via Intercom, focusing on applications and use cases built by our Solutions team and based on Mistral products (e.g., Mistral Studio, Document AI).</li>
<li>Root Cause Analysis: Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (e.g., API errors, edge-case failures, processing workflow issues).</li>
<li>Cross-Team Collaboration: Work closely with solutions and engineering teams to escalate, track, and resolve incidents efficiently.</li>
<li>Proactive Communication: Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.</li>
</ul>
<p>Knowledge Sharing &amp; Process Improvement:</p>
<ul>
<li>Documentation: Create and update technical FAQs, application documentation, and troubleshooting guides.</li>
<li>Feedback Loop: Identify recurring pain points in customers’ applications and suggest improvements to product, documentation, or support workflows.</li>
</ul>
<p>Customer-Centric Approach:</p>
<ul>
<li>Empathy &amp; Ownership: Maintain a customer-first mindset, ensuring clients feel heard and supported, even in high-pressure situations.</li>
<li>Solution-Oriented: Proactively propose workarounds, fixes, or process optimizations to enhance the customer experience and reduce incident resolution time.</li>
</ul>
<p>Technical Expertise:</p>
<ul>
<li>Full-Stack Engineering: Experience with both frontend (React, NextJS, VueJS) and backend (Python, FastAPI) software engineering.</li>
<li>AI Engineering: Experience with AI and LLM applications.</li>
<li>(bonus) Kubernetes/Helm: Experience with deployment, scaling, and troubleshooting of applications in Kubernetes clusters using Helm charts.</li>
<li>Tooling: Proficiency in Intercom, monitoring tools, scripting (Bash/Python), and diagnostic utilities (logs, performance metrics).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Technical Support, Incident Management, Root Cause Analysis, Cross-Team Collaboration, Proactive Communication, Documentation, Feedback Loop, Empathy &amp; Ownership, Solution-Oriented, Full-Stack Engineering, AI Engineering, Kubernetes/Helm, Tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/a228ac73-62f1-4a2a-8afe-5070f445143f</Applyto>
      <Location>Marseille</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>90989a62-e70</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Role summary</p>
<p>Build, evolve, and operate backend services at scale for ZoomInfo. You&#39;ll work primarily with Node.js/TypeScript (NestJS preferred), design robust REST/GraphQL APIs, optimize MongoDB/Redis, and deploy to the cloud (GCP preferred, AWS acceptable) with a strong focus on reliability, performance, security, and cost efficiency.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Design, implement, and own microservices and REST/GraphQL APIs in Node.js/TypeScript (NestJS preferred)</li>
<li>Translate product requirements into technical designs; break down work, estimate, and deliver incrementally</li>
<li>Model data and optimize queries in MongoDB; implement effective caching with Redis (TTL, eviction, hot-key mitigation)</li>
<li>Ship production-ready code with unit/integration tests; participate in on-call, incident response, and postmortems</li>
<li>Containerize and deploy via Docker/Kubernetes; automate builds and releases with CI/CD (blue/green or canary)</li>
<li>Instrument services for logs, metrics, and traces (p95/p99); continuously improve latency, reliability, and cost</li>
<li>Review code, document designs, and mentor SE II/III engineers; contribute to shared standards and best practices</li>
</ul>
<p>What you bring:</p>
<ul>
<li>7+ years of software engineering experience, including 3+ years building backend services in Node.js/TypeScript</li>
<li>Strong API fundamentals: versioning, pagination, authN/Z (OAuth/OIDC), and secure coding (OWASP)</li>
<li>Hands-on with NestJS/Express/Fastify; familiarity with microservices patterns and event-driven workflows</li>
<li>MongoDB expertise (schema design, indexing, basic sharding concepts) and Redis caching patterns</li>
<li>Cloud experience on GCP (preferred) or AWS; Docker; working knowledge of Kubernetes; CI/CD with GitHub Actions/Jenkins/GitLab</li>
<li>Observability skills: Datadog/OpenTelemetry/Prometheus/Grafana; confident debugging in production</li>
<li>Collaboration and communication skills; bias for clean, well-tested, and well-documented code</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Kafka or Pub/Sub; API Gateway/Ingress; feature flags; rate limiting and quotas</li>
<li>Terraform/Helm; security tooling (SonarQube), dependency hygiene, secret management</li>
<li>Performance profiling, load testing, and practical cost optimization</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, TypeScript, NestJS, MongoDB, Redis, Cloud, Docker, Kubernetes, CI/CD, API fundamentals, Microservices, Event-driven workflows, Observability, Collaboration, Communication, Kafka, Pub/Sub, API Gateway, Ingress, Feature flags, Rate limiting, Quotas, Terraform, Helm, Security tooling, Dependency hygiene, Secret management, Performance profiling, Load testing, Cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a leading provider of a go-to-market intelligence platform that empowers businesses to grow faster with AI-ready insights, trusted data, and advanced automation.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8213981002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>aff12a89-c60</externalid>
      <Title>Member of Technical Staff - Data Infrastructure Manager</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad, to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all, consumers, businesses, developers, so that everyone can realize its benefits.</p>
<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>
<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise in big data and distributed systems</li>
<li>A track record of leading and developing engineering talent</li>
<li>A passion for automation, observability, and operational excellence</li>
<li>The ability to translate complex technical strategy into clear, executable plans</li>
<li>Empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<p>Team Leadership &amp; People Development:</p>
<ul>
<li>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth.</li>
<li>Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development.</li>
<li>Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Technical Strategy &amp; Architecture:</p>
<ul>
<li>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems.</li>
<li>Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase.</li>
<li>Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</li>
</ul>
<p>Platform &amp; Operations:</p>
<ul>
<li>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response.</li>
<li>Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams.</li>
<li>Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM.</li>
<li>Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos).</li>
<li>Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience, OR Bachelor’s Degree AND 14+ years, OR equivalent experience.</li>
<li>5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments.</li>
<li>5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation fluency in Python, Bash, or PowerShell.</li>
<li>Proven track record managing CI/CD pipelines, release automation, and production incident response.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $139,900 – $239,900 (Software Engineering M5; typical U.S. base pay range)</Salaryrange>
      <Skills>Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Cloud-Native Infrastructure, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, CI/CD Pipelines, Release Automation, Production Incident Response, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Modern Web Stacks, TypeScript, Node.js, React, PHP, Python, Bash, PowerShell, Kubernetes, Helm/Kustomize, Azure, AWS, GCP, Networking, Security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>88c171c8-d1c</externalid>
      <Title>Member of Technical Staff - Data Infrastructure Manager</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad, to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all, consumers, businesses, developers, so that everyone can realize its benefits.</p>
<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>
<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise in big data and distributed systems</li>
<li>A track record of leading and developing engineering talent</li>
<li>A passion for automation, observability, and operational excellence</li>
<li>The ability to translate complex technical strategy into clear, executable plans</li>
<li>Empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<p>Team Leadership &amp; People Development:</p>
<ul>
<li>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth.</li>
<li>Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development.</li>
<li>Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Technical Strategy &amp; Architecture:</p>
<ul>
<li>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems.</li>
<li>Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase.</li>
<li>Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</li>
</ul>
<p>Platform &amp; Operations:</p>
<ul>
<li>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response.</li>
<li>Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams.</li>
<li>Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM.</li>
<li>Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos).</li>
<li>Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience, OR Bachelor’s Degree AND 14+ years, OR equivalent experience.</li>
<li>5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments.</li>
<li>5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation fluency in Python, Bash, or PowerShell.</li>
<li>Proven track record managing CI/CD pipelines, release automation, and production incident response.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Infrastructure, Distributed Systems, DevOps, SRE, Platform Engineering, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Databricks, Relational Databases, NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Data Infrastructure Patterns, Multi-Agent Systems, TypeScript, Node.js, React, PHP, Containerized Application Deployments, Modern Web Stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bd829e13-6ce</externalid>
      <Title>Member of Technical Staff - Data Infrastructure Manager</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad, to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all, consumers, businesses, developers, so that everyone can realize its benefits.</p>
<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>
<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise in big data and distributed systems</li>
<li>A track record of leading and developing engineering talent</li>
<li>A passion for automation, observability, and operational excellence</li>
<li>The ability to translate complex technical strategy into clear, executable plans</li>
<li>Empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<p>Team Leadership &amp; People Development:</p>
<ul>
<li>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth.</li>
<li>Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development.</li>
<li>Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Technical Strategy &amp; Architecture:</p>
<ul>
<li>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems.</li>
<li>Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase.</li>
<li>Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</li>
</ul>
<p>Platform &amp; Operations:</p>
<ul>
<li>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response.</li>
<li>Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams.</li>
<li>Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM.</li>
<li>Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos).</li>
<li>Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</li>
</ul>
<p><strong>Qualifications</strong></p>
<p>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering work, OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work, OR equivalent experience.</p>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience, OR Bachelor’s Degree AND 14+ years, OR equivalent experience.</li>
<li>5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>5+ years of hands-on experience with distributed systems, from bare-metal to cloud-native environments.</li>
<li>5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation fluency in Python, Bash, or PowerShell.</li>
<li>Proven track record managing CI/CD pipelines, release automation, and production incident response.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
</ul>
<p>#MicrosoftAI #MAIDPS #mai-datainsights</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,000 per year</Salaryrange>
      <Skills>Big Data, Distributed Systems, Data Infrastructure, DevOps, SRE, Platform Engineering, Containerized Application Deployments, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Modern Data Platforms, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Web Stacks, TypeScript, Node.js, React, PHP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-3/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dacc9b06-4d8</externalid>
      <Title>Member of Technical Staff - Principal Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad , to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all , consumers, businesses, developers , so that everyone can realize its benefits.</p>
<p>We’re looking for a Member of Technical Staff – Principal Data Infrastructure Engineer. This role is a dynamic blend of Platform Engineering, DevOps/SRE, and Big Data Infrastructure Engineering, focused on enabling large-scale data and ML pipelines and intelligent systems. If you’ve architected big data platforms from the ground up and are eager to apply that expertise to consumer AI, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise</li>
<li>A passion for automation and observability</li>
<li>Fluency in distributed systems</li>
<li>Creativity to design scalable solutions</li>
<li>And, just as importantly: empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications.</li>
<li>Champion DevOps and SRE best practices: automated deployments, service monitoring, and incident response.</li>
<li>Build a self-service big data platform that empowers data and platform engineers and researchers.</li>
<li>Develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM).</li>
<li>Collaborate with Data Engineers, Data Scientists, AI Researchers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Lead technical design reviews and uphold a clean, secure, and well-documented codebase.</li>
<li>Proactively identify and resolve bottlenecks in data pipelines and infrastructure.</li>
<li>Optimize system performance across storage, compute, and analytics layers.</li>
<li>Partner with Security teams to enhance system security (IAM, OAuth, Kerberos).</li>
<li>Embody and promote Microsoft’s values: Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>3+ years of hands-on experience managing and scaling distributed systems, from bare-metal to cloud-native environments.</li>
<li>2+ years deploying containerized applications using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation skills using Python, Bash, or PowerShell.</li>
<li>Proven success in CI/CD pipeline management, release automation, and production troubleshooting.</li>
<li>Experience working with Databricks for scalable data processing and analytics.</li>
<li>Familiarity with security practices in infrastructure environments, including IAM, OAuth, and Kerberos administration.</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including a deep understanding of data storage and processing technologies: relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Capacity planning and incident management for large-scale big data systems.</li>
<li>Solid collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
<li>Exposure to agentic workflows, deep learning, or AI frameworks.</li>
<li>Practical experience integrating LLMs (e.g., GPT-based models) into daily workflows: automating documentation, code generation, reviews, and operational intelligence.</li>
<li>Solid grasp of prompt engineering techniques to design, optimize, and evaluate interactions with LLMs.</li>
<li>Demonstrated ability to troubleshoot and resolve complex performance and scalability issues across infrastructure layers.</li>
<li>Excellent interpersonal and communication skills, with a passion for mentorship and continuous learning.</li>
<li>Experience applying LLMs to DevOps workflows, enhancing incident response, and streamlining cross-functional collaboration is a strong advantage.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Big Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Databricks, CI/CD Pipelines, Infrastructure as Code, Bicep, Terraform, ARM, Python, Bash, PowerShell, Kubernetes, Helm, Kustomize, LLMs, GPT-based models, Prompt Engineering, Agentic Workflows, Deep Learning, AI Frameworks, Containerized Applications, Security Practices, IAM, OAuth, Kerberos Administration, Web Stacks, TypeScript, Node.js, React, PHP, Modern Data Platforms, Spark Compute Engines, Distributed File Systems, Messaging Systems, Capacity Planning, Incident Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-principal-data-infrastructure-engineer-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bee517db-e9c</externalid>
      <Title>DevOps Engineer (all genders)</Title>
      <Description><![CDATA[<p>Join our DevOps team at Holidu, a central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>
<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>
<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>
<li>Container Orchestration: Kubernetes with Helm</li>
<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/CDK</li>
<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>
<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>
<li>Scripting: Python, Go, Bash</li>
<li>Version Control: GitHub</li>
<li>Collaboration: Jira (Agile)</li>
<li>Automation: N8N, AI-assisted tooling (Agentic ADK)</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DevOps Engineer, you will be responsible for:</p>
<ul>
<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>
<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>
<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>
<li>Troubleshooting production issues and participating in capacity planning</li>
<li>Maintaining and troubleshooting Kubernetes clusters: deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>
<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>
<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub Actions, ArgoRollouts, and ArgoCD</li>
<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>
<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>
<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>
<li>Hands-on experience with Docker and Kubernetes in production environments: deploying, scaling, and troubleshooting containerized workloads</li>
<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>
<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>
<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>
<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>
<li>Familiarity with logging and log aggregation systems (Elastic Stack, OpenTelemetry, or similar)</li>
<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>
<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with Helm charts and Kubernetes package management</li>
<li>Familiarity with GitOps workflows (e.g., GitHub Actions, ArgoCD, Flux)</li>
<li>Experience designing architectures based on AWS services is a plus</li>
<li>Experience with AI automation or low-code/no-code platforms such as N8N is a plus</li>
<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>
<li>Exposure to cost optimization strategies for cloud infrastructure</li>
<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>
<li>Experience with DevSecOps practices: integrating security scanning and compliance into CI/CD pipelines</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>
<li>Technology: Work in a modern tech environment</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud, Container Orchestration, Infrastructure as Code, Monitoring &amp; Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2595036</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a6557b2b-d24</externalid>
      <Title>Senior Platform Engineer II, Compute Services</Title>
      <Description><![CDATA[<p>We are seeking a Senior Platform Engineer to join our Kubernetes Infrastructure team. This role involves administering our critical multi-tenant Kubernetes platforms and collaborating with development teams to establish proper deployment architectures.</p>
<p>The ideal candidate will have a strong background in resilient Kubernetes application architecture and deployment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Champion reliability initiatives for Kubernetes application deployments: Advocate for best practices to ensure high availability, scalability, and resilience of applications in Kubernetes, focusing on robust testing, secure pipelines, and efficient resource use.</li>
<li>Administer multi-tenant Kubernetes platforms: Manage complex multi-tenant Kubernetes clusters, configuring access, quotas, and security for isolation and optimal resource allocation while upholding SLAs.</li>
<li>Perform lifecycle and day 2 operations on clusters: Execute Kubernetes cluster lifecycle, including provisioning, patching, monitoring, backup, disaster recovery, and troubleshooting.</li>
<li>Deep dive into reliability issues: Conduct in-depth analysis and root cause identification for complex reliability incidents in Kubernetes, utilizing advanced debugging and monitoring tools to propose preventative measures.</li>
<li>Perform on-call duties: Respond to critical alerts and incidents outside business hours, providing timely resolution to minimize disruptions, collaborating with teams, and communicating clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s in CS, Engineering, or related field, or equivalent experience preferred.</li>
<li>CKA or similar certifications are highly desired.</li>
<li>5+ years administering multi-tenant SaaS Kubernetes platforms (EKS, AKS, GKE).</li>
<li>Strong GitOps/DevOps experience with ArgoCD or similar Helm chart management.</li>
<li>Proven Docker and containerization experience.</li>
<li>Strong Linux OS experience.</li>
<li>Proficient in Go.</li>
<li>Excellent problem-solving, debugging, and analytical skills.</li>
<li>Strong communication and collaboration.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>Benefits</strong></p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p><strong>Workplace</strong></p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Gitops/Devops, Argocd, Helm chart management, Docker, Containerization, Linux OS, Go, Problem-solving, Debugging, Analytical skills, Communication, Collaboration, CKA, Performance profiling, Optimization of distributed systems, Network protocols, Distributed consensus algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4607559006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>76d3f53b-3c6</externalid>
      <Title>Staff Software Engineer, Quality and Release Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Staff Software Engineer to join our Quality and Release Platform (QARP) team and lead the technical direction of the platforms that power how dbt Labs builds, tests, and ships software.</p>
<p>Our mission spans two critical areas: release engineering (making it easy for engineers to ship changes quickly, safely, and reliably) and code quality (building a platform that raises the bar for code quality across all of dbt Labs engineering).</p>
<p>In this role, you&#39;ll work with tools like Helm, ArgoCD, Terraform, Python, GitHub Actions, and Kargo to architect and scale our deployment systems, while also helping design and build the tooling, frameworks, and automation that enable engineering teams to consistently produce high-quality code.</p>
<p>This is a high-impact, staff-level role where you&#39;ll set architectural direction, mentor engineers, and drive initiatives that improve developer velocity, code quality, and reliability across the entire engineering organization.</p>
<p>Responsibilities</p>
<ul>
<li>Define and drive the technical strategy and architecture for our CI/CD platform, release management systems, and code quality platform.</li>
<li>Design and build tooling, frameworks, and automation that help engineering teams maintain and improve code quality across the organization.</li>
<li>Lead high-impact initiatives that improve automation, observability, and self-service capabilities for engineers across the organization.</li>
<li>Mentor and level up other engineers on the team, fostering a culture of technical excellence and continuous improvement.</li>
<li>Collaborate across teams and with engineering leadership to identify systemic challenges in our delivery and quality processes and architect solutions to address them.</li>
<li>Evolve our release architecture to support dbt Cloud&#39;s multi-cloud, cell-based infrastructure at scale.</li>
<li>Establish best practices and standards for build pipelines, release workflows, code quality, and infrastructure-as-code that are adopted across engineering.</li>
<li>Serve as a thought leader in engineering&#39;s internal AI strategy: evaluating AI-assisted development tools, defining adoption practices and guardrails, and enabling developers to use AI effectively across the org.</li>
</ul>
<p>Requirements</p>
<ul>
<li>8+ years of software engineering experience, with significant time in platform, infrastructure, release engineering, or developer tooling.</li>
<li>A track record of leading technical strategy and architecture for complex, production-scale CI/CD, code quality, or platform systems.</li>
<li>Deep experience with one or more of the following: Helm, ArgoCD, Terraform, GitHub Actions, or Kubernetes.</li>
<li>Strong background in Python, Go, or Rust for automation, platform tooling, or systems development.</li>
<li>Passion for code quality and experience building or improving tools, linters, static analysis, testing frameworks, or CI checks that help teams write better code.</li>
<li>Demonstrated ability to drive cross-team initiatives and influence engineering-wide practices and standards.</li>
<li>Excellent communication skills: able to translate complex technical concepts for diverse audiences and lead through influence.</li>
<li>Demonstrated interest or hands-on experience with AI-assisted development tools and practices, with a perspective on how AI can improve engineering productivity and code quality.</li>
<li>Experience working asynchronously as part of a fully remote, distributed team.</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Experience with Kargo or similar progressive delivery systems.</li>
<li>Hands-on experience with multi-cloud architectures (AWS, GCP, Azure).</li>
<li>Experience building code quality platforms, static analysis tooling, or testing infrastructure at scale.</li>
<li>Experience defining and rolling out engineering-wide code quality standards or best practices.</li>
<li>A track record of improving developer productivity or release safety across a large engineering organization.</li>
<li>Experience mentoring engineers and shaping team culture in a staff or principal-level role.</li>
<li>A track record of evaluating, championing, and rolling out AI developer tools (e.g., Copilot, Cursor, Claude Code) within an engineering organization.</li>
<li>Experience defining guidelines, guardrails, or best practices for AI-assisted development.</li>
</ul>
<p>Compensation &amp; Benefits</p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is $207,000 - $251,000 USD.</li>
<li>The typical starting salary range for this role in the select locations listed is $230,000 - $279,000 USD.</li>
</ul>
<p>Equity Stake Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
</ul>
<p>Our Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews - Technical (3 rounds, 60 Mins each)</li>
<li>Values Interview (30 Mins)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Helm, ArgoCD, Terraform, Python, GitHub Actions, Kargo, Kubernetes, multi-cloud architectures, code quality platforms, static analysis tooling, testing infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, used by over 90,000 teams every week, with annual recurring revenue exceeding $100 million.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4666468005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1903386-87b</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</p>
<p>Responsibilities:</p>
<ul>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering so we can spend energy where it matters.</li>
<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field.</p>
<ul>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD tooling such as GitHub Actions.</li>
</ul>
<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>
<p>Compensation includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower team members to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4535898008</Applyto>
      <Location>Germany (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26212e9e-5a8</externalid>
      <Title>Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>Ours is a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering so we can spend energy where it matters.</li>
<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>5+ years experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD tooling such as GitHub Actions.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>
<li>Flexible vacation time to promote a healthy work-life blend.</li>
<li>Paid parental leave to support you and your family.</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5113847008</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac95264-313</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>
<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>
<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, is required. Experience with GPU-enabled clusters is a bonus.</p>
<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and with IaC tools such as Terraform or CloudFormation, is a plus.</p>
<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, and with other cloud providers such as Google Cloud and Azure, is a bonus.</p>
<p>Experience with GitOps tooling such as Flux or Argo, and with CI/CD tooling such as GitHub Actions, is a plus.</p>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4802840008</Applyto>
      <Location>Romania (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance and voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account and Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental wellness benefits through Spring Health</li>
<li>Family-forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eef55d3d-bf0</externalid>
      <Title>Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Job Title: Cloud Deployment Engineer, Space</p>
<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>ABOUT THE JOB</strong></p>
<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>
<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>
<p><strong>REQUIRED QUALIFICATIONS</strong></p>
<ul>
<li>5+ years of working experience in DevOps or SRE-type roles</li>
<li>Strongly proficient in utilizing cloud services like AWS, Azure, or Google Cloud Platform</li>
<li>Experience with IaC tools (Terraform, CloudFormation, Puppet, Ansible, etc.)</li>
<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>
<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>
<li>Programming ability in one or more of the general scripting languages (Python, Go, Bash, Rust, etc)</li>
<li>Strong problem-solving skills and the ability to work well under pressure</li>
<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>
<li>Experience deploying complex and scalable infrastructure solutions</li>
<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
<p><strong>PREFERRED QUALIFICATIONS</strong></p>
<ul>
<li>Extensive expertise in Kubernetes and Helm</li>
<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>
<li>Cisco Certified Network Associate (CCNA)</li>
<li>Experience with government Cyber certification processes</li>
<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>
<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>
<li>Military service background (particularly with Space experience)</li>
</ul>
<p>US Salary Range: $129,000-$171,000 USD</p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. The actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers and are considered part of Anduril&#39;s total compensation package.</p>
<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<ul>
<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>
<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>
<li>Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development.</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the integrity of our talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes in which individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
</ul>
<p></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0787994a-b99</externalid>
      <Title>Senior Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Anduril Industries is seeking a Senior Cloud Deployment Engineer to join their Space team. The successful candidate will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. They will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of the duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. The engineer will also be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>The role requires 8+ years of working experience in DevOps or SRE-type roles, with strong proficiency in utilizing cloud services like AWS, Azure, or Google Cloud Platform. Experience with IaC tools (Terraform, CloudFormation, Puppet, Ansible, etc.) and containerization technologies such as Docker, along with orchestration tools like Kubernetes and Helm, is also required.</p>
<p>The salary range for this role is $166,000-$220,000 USD per year, with highly competitive equity grants included in the majority of full-time offers. Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, and family planning and parenting support.</p>
<p></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
<Skills>AWS, Azure, Google Cloud Platform, IaC, Kubernetes, Helm, Docker, Terraform, CloudFormation, Puppet, Ansible</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that aims to transform U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5032429007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d856957a-ee4</externalid>
      <Title>Orbital Software Engineer, Space</Title>
      <Description><![CDATA[<p>As an Orbital Software Engineer on Space, you will contribute to the architecture and deployment of software solutions that support specific customer missions for on-orbit spacecraft operations and mission management.</p>
<p>This role involves providing guidance and oversight to a team developing modular capabilities to support DoD and IC customers across the space domain. You will work on the architecture of an orbital software system in coordination with the Anduril Lattice software platform team, developing the algorithms, techniques, and code needed to support orbital systems and their interfaces with multi-domain systems.</p>
<p>The role requires integration with legacy systems that have been part of our nation&#39;s critical defense for decades as well as new space systems being added to the cache of orbital and ground-based capabilities. You will also be responsible for the interfaces to multi-modal payload platforms, bus platforms, and networking solutions in proliferated satellite constellations.</p>
<p>We work with mission partners and operators to deploy reliable and robust capabilities on operationally relevant fielding timelines to meet complex challenges across the DoD and IC.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing to software solutions that are deployed to customers.</li>
<li>Contributing to development of on-orbit software capabilities to enable delivery of end-to-end mission systems.</li>
<li>Integrating with legacy systems to unlock 21st-century capabilities.</li>
<li>Writing code to improve products and scale the mission capability to different users and customers.</li>
<li>Collaborating across multiple teams to plan, build, and test complex functionality.</li>
<li>Creating and analyzing metrics that are leveraged for debugging and monitoring.</li>
<li>Triaging issues, root-causing failures, and coordinating next steps.</li>
<li>Partnering with end-users to turn needs into features while balancing user experience with engineering constraints.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics.</li>
<li>Ability to quickly understand and navigate complex systems and detailed requirements.</li>
<li>Capable of solving complex technical problems with little oversight.</li>
<li>Clear communication and organizational skills including documentation and training material.</li>
<li>Ideally 3+ years of professional experience working with a variety of programming languages such as Python, C++, Rust, or Go.</li>
<li>Experience with spacecraft software systems and spacecraft operations.</li>
<li>Experience with satellite mission autonomy to include fault isolation and recovery systems.</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Advanced expertise with Python, Go, Rust, or C++.</li>
<li>Experience with deployment tooling like Kubernetes, OpenShift, or Helm.</li>
<li>A desire to work on critical software in the space domain.</li>
<li>Experience with OMS/UCI standards and modular software services development.</li>
</ul>
<p></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Python, C++, Rust, Go, Spacecraft software systems, Satellite mission autonomy, Fault isolation and recovery systems, Kubernetes, OpenShift, Helm, OMS/UCI standards, Modular software services development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4872480007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8ec69b82-1cb</externalid>
      <Title>Ground Software Engineer, Space</Title>
      <Description><![CDATA[<p>As a Mission Software Engineer on Space Ground Software, you will own the architecture and deployment of software solutions that support customer missions for space operations and mission management.</p>
<p>This role involves developing modular capabilities to support DoD and IC customers across the space domain. You will work with mission partners and operators to deploy reliable and robust capabilities on operationally relevant fielding timelines to meet complex challenges across the DoD and IC.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the software solutions that are deployed to customers</li>
<li>Developing Space C2 and Mission Management capabilities to enable delivery of end-to-end mission systems</li>
<li>Integrating with legacy systems to unlock 21st-century capabilities</li>
<li>Writing code to improve products and scale the mission capability to different users</li>
<li>Collaborating across multiple teams to plan, build, and test complex functionality</li>
<li>Creating and analyzing metrics that are leveraged for debugging and monitoring</li>
<li>Triaging issues, root-causing failures, and coordinating next steps</li>
<li>Partnering with end-users to turn needs into features while balancing user experience with engineering constraints</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics</li>
<li>Ability to quickly understand and navigate complex systems and detailed requirements</li>
<li>Capable of solving complex technical problems with little oversight</li>
<li>Clear communication and organizational skills including documentation and training material</li>
<li>3+ years of experience working with a variety of programming languages such as Java, Python, C++, Rust, Go, JavaScript, etc.</li>
<li>Experience with Space C2 software systems and spacecraft operations</li>
<li>A desire to work on critical software that has a real-world impact</li>
<li>Currently possesses and is able to maintain an active U.S. Top Secret SCI security clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with Go, Rust, or C++</li>
<li>Experience with deployment tooling such as Kubernetes, OpenShift, or Helm</li>
<li>Previous exposure to space ground hardware like sensors, radars, telescopes</li>
<li>Previous exposure to space-related simulation and hardware integration</li>
<li>A desire to work on critical software in the space domain</li>
<li>Experience with existing space software standards and modular software services development</li>
</ul>
<p></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$168,000-$220,000 USD</Salaryrange>
      <Skills>Java, Python, C++, Rust, Go, JavaScript, Space C2 software systems, Spacecraft operations, Kubernetes, OpenShift, Helm, Space ground hardware, Space-related simulation and hardware integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4767772007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2cfdef10-0fe</externalid>
      <Title>Ground Software Engineer, Space</Title>
      <Description><![CDATA[<p>As a Ground Software Engineer on the Space team, you will own the architecture and deployment of software solutions that support customer missions for space operations and mission management. You will develop modular capabilities to support the DoD and IC customers across the space domain. This will involve the architecture of the Space C2 system based on top of the Anduril Lattice software platform, to include infrastructure, algorithms, and integrations to support both orbital and ground systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the software solutions that are deployed to customers</li>
<li>Developing Space C2 and Mission Management capabilities to enable delivery of end-to-end mission systems</li>
<li>Integrating with legacy systems to unlock 21st-century capabilities</li>
<li>Writing code to improve products and scale the mission capability to different users</li>
<li>Collaborating across multiple teams to plan, build, and test complex functionality</li>
<li>Creating and analyzing metrics that are leveraged for debugging and monitoring</li>
<li>Triaging issues, root-causing failures, and coordinating next steps</li>
<li>Partnering with end-users to turn needs into features while balancing user experience with engineering constraints</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics</li>
<li>Ability to quickly understand and navigate complex systems and detailed requirements</li>
<li>Capable of solving complex technical problems with little oversight</li>
<li>Clear communication and organizational skills including documentation and training material</li>
<li>3+ years of experience working with a variety of programming languages such as Java, Python, C++, Rust, Go, JavaScript, etc.</li>
<li>Experience with Space C2 software systems and spacecraft operations</li>
<li>A desire to work on critical software that has a real-world impact</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with Go, Rust, or C++</li>
<li>Experience with deployment tooling such as Kubernetes, OpenShift, or Helm</li>
<li>Previous exposure to space ground hardware like sensors, radars, telescopes</li>
<li>Previous exposure to space-related simulation and hardware integration</li>
<li>A desire to work on critical software in the space domain</li>
<li>Experience with existing space software standards and modular software services development</li>
</ul>
<p></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$168,000-$220,000 USD</Salaryrange>
      <Skills>Java, Python, C++, Rust, Go, JavaScript, Space C2 software systems, Spacecraft operations, Kubernetes, OpenShift, Helm, Space ground hardware, Space-related simulation and hardware integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4497427007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fca5411d-4fb</externalid>
      <Title>Staff Site Reliability Engineer - Kubernetes</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Workforce Identity Cloud</p>
<p>Okta Workforce Identity Cloud (WIC) provides easy, secure access for your workforce so you can focus on other strategic priorities, like reducing costs and doing more for your customers.</p>
<p>If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethos of “If you have to do something more than once, automate it” and who can rapidly self-educate on new concepts and tools.</p>
<p><strong>Position Overview:</strong></p>
<p>The Site Reliability Engineer (SRE) will play a key role in building and managing Kubernetes platforms that support cloud-native applications and services. This position focuses on architecting and managing reliable, scalable, and secure Kubernetes-based platforms on AWS, ensuring high availability and performance while optimising costs and automation. The ideal candidate will have hands-on experience with AWS infrastructure, Kubernetes platform creation, Helm charts, Karpenter scaling, and Istio service mesh.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Kubernetes Platform Creation: Design, implement, and maintain highly available, scalable, and fault-tolerant Kubernetes platforms. Ensure clusters are optimised for production workloads, providing high resilience and operational efficiency.</li>
<li>AWS Infrastructure Management: Build, manage, and optimise AWS cloud infrastructure, including EKS, ECS, S3, VPCs, RDS, IAM, and more. Implement best practices for cost management, scaling, and security within AWS.</li>
<li>Helm Management: Utilise Helm to automate and streamline the deployment of applications and services to Kubernetes clusters. Create, maintain, and manage Helm charts for production-ready deployments.</li>
<li>Karpenter Implementation: Implement and manage Karpenter to dynamically scale Kubernetes clusters in response to workload demands.</li>
<li>Istio Service Mesh Management: Configure and manage Istio to provide service-to-service communication, security, and observability within the Kubernetes clusters. Enable fine-grained traffic management, service discovery, and policy enforcement.</li>
<li>Platform Automation &amp; Scaling: Automate the deployment, scaling, and management of infrastructure and applications. Work with CI/CD pipelines to ensure a seamless flow from development to production with minimal downtime.</li>
<li>Incident Management &amp; Troubleshooting: Respond to incidents, troubleshoot, and resolve system issues related to performance, availability, and security in a timely and effective manner.</li>
<li>Security &amp; Compliance: Design and implement secure cloud infrastructure with appropriate access controls, network security, and compliance frameworks.</li>
<li>Documentation &amp; Knowledge Sharing: Create and maintain detailed documentation for Kubernetes platform setup, operational procedures, and best practices. Promote knowledge sharing across teams.</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>4+ years of experience with Kubernetes/Helm</li>
<li>4+ years of experience with Terraform</li>
<li>5+ years of experience with AWS</li>
<li>Experience with multi-region cloud environments</li>
<li>Proven experience with AWS (EC2, RDS, S3, CloudFormation, IAM, etc.) and a solid understanding of cloud-native architectures</li>
<li>Strong expertise in Kubernetes platform creation, management, and optimisation (e.g., setting up highly available clusters, networking, and storage)</li>
<li>Hands-on experience with Helm for Kubernetes application deployment and management</li>
<li>Practical experience with Karpenter for dynamic scaling of Kubernetes clusters and optimising resource usage</li>
<li>Expertise in managing and securing Istio for service mesh, including traffic management, security, and observability features</li>
<li>Proficiency in CI/CD pipelines and automation tools (e.g., Jenkins, GitLab, CircleCI, Terraform, Ansible, Spinnaker)</li>
<li>Strong scripting and automation skills in Python, Bash, or Go for infrastructure management and platform automation</li>
<li>Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, CloudWatch, and the ELK Stack</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Understanding of security best practices for cloud platforms and Kubernetes (e.g., role-based access control (RBAC), encryption, and compliance frameworks)</li>
<li>Familiarity with Docker and containerization principles</li>
<li>Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent professional experience)</li>
<li>Certifications (preferred): CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), or AWS Certified DevOps Engineer are highly desirable</li>
</ul>
<p>Additional requirements:</p>
<ul>
<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</li>
</ul>
<ul>
<li>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</li>
</ul>
<p>Requisition ID: P16373_3396241</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$267,000 USD</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $174,000-$214,000 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
</ul>
<ul>
<li>Driving Social Impact</li>
</ul>
<ul>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$174,000-$214,000 USD</Salaryrange>
      <Skills>Kubernetes, Helm, Terraform, AWS, Cloud-native architectures, Kubernetes platform creation, Kubernetes management, Kubernetes optimisation, Helm for Kubernetes application deployment, Karpenter for dynamic scaling, Istio for service mesh, CI/CD pipelines, Automation tools, Python, Bash, Go, Monitoring, Logging, Alerting, Security best practices for cloud platforms and Kubernetes, Docker and containerization principles, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified DevOps Engineer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>174000</Compensationmin>
      <Compensationmax>214000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743339</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6984004d-b3f</externalid>
      <Title>Intermediate Backend Engineer, GitLab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab with assurance by building and supporting the deployment tooling, infrastructure, and automation behind how GitLab is installed, upgraded, and operated.</p>
<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to improve reliability, security, and scalability in production-grade environments. This is a hands-on role where you&#39;ll partner with Distribution Engineers, Site Reliability Engineers, Release Managers, Security, and Development teams to make self-managed GitLab easier to use across a wide range of platforms.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolve Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support new GitLab features and architectures</li>
</ul>
<ul>
<li>Improve installation, upgrade, and validation automation for large-scale self-managed GitLab deployments</li>
</ul>
<ul>
<li>Maintain and improve the Omnibus GitLab package so GitLab components work reliably in self-managed deployments.</li>
</ul>
<ul>
<li>Develop and support GitLab Helm Charts for scalable, production-ready Kubernetes deployments.</li>
</ul>
<ul>
<li>Enhance the GitLab Environment Toolkit (GET) and validated reference architectures used by enterprise and internal users.</li>
</ul>
<ul>
<li>Support and extend the GitLab Operator for Kubernetes-native lifecycle management of GitLab installations.</li>
</ul>
<ul>
<li>Improve the installation, upgrade, and day-to-day operating experience across supported self-managed platforms.</li>
</ul>
<ul>
<li>Collaborate with Security to address vulnerabilities and strengthen secure defaults and configurations across the deployment stack.</li>
</ul>
<ul>
<li>Build and maintain automation and continuous integration and continuous deployment pipelines that validate deployment tooling across Omnibus, Charts, GET, and the Operator.</li>
</ul>
<ul>
<li>Partner with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features and keep user-facing documentation accurate and useful.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building and maintaining backend services in production environments, especially in deployment, infrastructure, or platform tooling.</li>
</ul>
<ul>
<li>Practical knowledge of Kubernetes operations, including authoring and maintaining Helm charts.</li>
</ul>
<ul>
<li>Proficiency with Ruby and Go, along with scripting skills to automate workflows and tooling.</li>
</ul>
<ul>
<li>Familiarity with Terraform and infrastructure as code practices across cloud and on-premises environments.</li>
</ul>
<ul>
<li>Hands-on experience with relational databases, especially PostgreSQL, including performance and reliability considerations.</li>
</ul>
<ul>
<li>Understanding of secure, scalable, and supportable deployment practices, along with observability tools such as Prometheus and Grafana.</li>
</ul>
<ul>
<li>Experience collaborating in large codebases and distributed teams, including writing clear user-facing documentation and implementation guides.</li>
</ul>
<ul>
<li>Openness to learning new technologies and applying transferable skills across different parts of the GitLab deployment stack.</li>
</ul>
<p>The Upgrades team is part of GitLab Delivery and delivers GitLab to self-managed users through supported, validated deployment tooling. The team maintains Omnibus GitLab, Helm Charts, the GitLab Operator, and the GitLab Environment Toolkit (GET) to help self-managed users deploy GitLab securely and reliably across diverse environments. You&#39;ll join a distributed group of backend engineers that works asynchronously across time zones and collaborates closely with Site Reliability Engineering, Release, Security, and Development teams. The team is focused on improving installation and upgrade workflows, strengthening automation and security, and helping self-managed customers run GitLab successfully at any scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Go, Kubernetes, Helm charts, Terraform, infrastructure as code, PostgreSQL, relational databases, observability tools, Prometheus, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by over 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463951002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44ff0179-993</externalid>
      <Title>Senior Backend Engineer (RoR), SSCS: Pipeline Security</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the Pipeline Security team, you&#39;ll take technical ownership of GitLab&#39;s native Secrets Manager, a production system built on OpenBao that helps secure sensitive credentials across GitLab CI/CD pipelines.</p>
<p>You&#39;ll work at the intersection of backend engineering and infrastructure, shaping architecture in Ruby on Rails and Go, guiding decisions around role-based access control (RBAC), GraphQL APIs, and Kubernetes deployment configuration.</p>
<p>In your first year, you&#39;ll help move Secrets Manager toward general availability, establish technical patterns the team can build on, and represent the team&#39;s point of view in cross-functional discussions.</p>
<p>You&#39;ll have end-to-end ownership, from design through production operations, with room to identify what should be built next and improve how the team delivers secure, reliable features.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and maintain secure, readable backend code primarily in Ruby on Rails, with some development in Go for targeted components.</li>
</ul>
<ul>
<li>Design backend architecture for complex security features, including secrets access control, pipeline security enforcement, and OpenBao integration.</li>
</ul>
<ul>
<li>Lead the development of role-based access control models, GraphQL APIs, and supporting application patterns for features owned by the team.</li>
</ul>
<ul>
<li>Own features end to end, from technical design and implementation through deployment, validation, and production support.</li>
</ul>
<ul>
<li>Collaborate with Product, security partners, and other engineering teams to document tradeoffs, align on direction, and deliver iteratively in a distributed environment.</li>
</ul>
<ul>
<li>Improve code quality, maintainability, security, and performance through code review, design iteration, and internal standards for a high-scale web environment.</li>
</ul>
<ul>
<li>Build and maintain Helm charts, including configuration, tuning, documentation, and automated testing for Kubernetes-based deployments.</li>
</ul>
<ul>
<li>Validate features in Kubernetes environments, including GitLab Cloud Native and Cloud Native Hybrid deployments, using GitLab testing and performance testing frameworks.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building and maintaining backend features with a focus on secure design, data handling, and production reliability.</li>
</ul>
<ul>
<li>Ability to write production-quality code in Ruby on Rails, including use of framework security patterns and review for common application risks.</li>
</ul>
<ul>
<li>Working knowledge of CI/CD concepts and the ways pipelines can be misconfigured, abused, or expose sensitive data.</li>
</ul>
<ul>
<li>Familiarity with secrets management approaches and security practices for handling credentials in CI environments; experience with tools such as HashiCorp Vault or similar systems is helpful.</li>
</ul>
<ul>
<li>Comfort collaborating across Product and engineering teams in an asynchronous, distributed environment and communicating technical tradeoffs clearly in writing.</li>
</ul>
<ul>
<li>Ability to review merge requests with a security-first mindset and improve solutions through feedback and iteration.</li>
</ul>
<ul>
<li>Experience debugging production issues, including investigation of security-related behavior and proposing practical fixes.</li>
</ul>
<ul>
<li>Openness to learning adjacent domains and tools, including Go, container security, and software supply chain security; we welcome transferable experience from different technical backgrounds.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Pipeline Security team builds features that make GitLab CI pipelines more secure and trustworthy for teams running sensitive workloads. We own key parts of pipeline security within GitLab&#39;s CI/CD experience, with our current focus on native secrets management for CI pipelines and Supply-chain Levels for Software Artifacts (SLSA) Level 3 capabilities to strengthen software supply chain security.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$117,600-$252,000 USD</Salaryrange>
      <Skills>Ruby on Rails, Go, OpenBao, Role-Based Access Control (RBAC), GraphQL APIs, Kubernetes deployment configuration, Helm charts, CI/CD concepts, Secrets management approaches, Security practices for handling credentials in CI environments, Container security, Software supply chain security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>117600</Compensationmin>
      <Compensationmax>252000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8432221002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Israel; Remote, Netherlands; Remote, United Kingdom; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc3534c1-6a6</externalid>
      <Title>Engineering Manager, GitLab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As the Engineering Manager, GitLab Delivery - Operate, you&#39;ll guide a globally distributed team focused on making it easier for customers to deploy, upgrade, and run GitLab reliably in their own infrastructure.</p>
<p>You&#39;ll help shape the systems and tooling that support environments ranging from single-node virtual machines to large Kubernetes clusters, with a focus on reliability, operational simplicity, upgrade velocity, and zero-downtime capabilities across GitLab.com, GitLab Dedicated, and self-managed deployments.</p>
<p>In this role, you&#39;ll partner closely with a Product Manager and work across Infrastructure Platforms to connect customer needs and business goals with practical engineering choices.</p>
<p>This is a hands-on leadership opportunity for someone who wants to support a high-performing team while influencing how GitLab is delivered at scale.</p>
<p>In your first year, you&#39;ll help the team deliver better deployment and upgrade experiences, guide technical direction in areas like Kubernetes Operators, Helm charts, and cloud-native deployment architectures, and contribute to incident management to help support the availability of GitLab.com.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Guide a globally distributed engineering team and create an environment where team members can do strong work and grow in an all-remote, asynchronous setting.</li>
</ul>
<ul>
<li>Hire, onboard, and develop team members who align with GitLab&#39;s values and contribute to an outcome-focused engineering organization.</li>
</ul>
<ul>
<li>Manage and improve agile, asynchronous workflows so the team can deliver deployment tooling and services iteratively and reliably.</li>
</ul>
<ul>
<li>Partner with Product Management and engineering peers across Infrastructure Platforms to align team priorities with customer needs and business goals.</li>
</ul>
<ul>
<li>Own the reliability, upgrade experience, and operational simplicity of GitLab deployments across self-managed environments, GitLab.com, and GitLab Dedicated.</li>
</ul>
<ul>
<li>Improve deployment patterns, observability, zero-downtime capabilities, and upgrade orchestration for customers running GitLab on their own infrastructure.</li>
</ul>
<ul>
<li>Apply technical judgment in areas such as Kubernetes Operators, Helm charts, and stateful application delivery to guide choices and unblock the team.</li>
</ul>
<ul>
<li>Participate in incident management and work with reliability and development teams to help maintain the availability of GitLab.com.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience guiding deployment tooling, platform engineering, or site reliability engineering teams that operate at meaningful scale.</li>
</ul>
<ul>
<li>Strong technical knowledge of Kubernetes Operators, Helm charts for stateful applications, and upgrade orchestration patterns.</li>
</ul>
<ul>
<li>Familiarity with cloud-native deployment architectures, database lifecycle management, schema migrations, and zero-downtime upgrade strategies.</li>
</ul>
<ul>
<li>Experience working on enterprise-scale or consumer-scale platforms, ideally in a product-focused software environment.</li>
</ul>
<ul>
<li>Ability to investigate complex deployment and operational issues and explain tradeoffs clearly to both technical and non-technical stakeholders.</li>
</ul>
<ul>
<li>Experience building high-performing, distributed teams and supporting team members in an asynchronous, all-remote environment.</li>
</ul>
<ul>
<li>Effective cross-functional skills across functions such as Infrastructure, Support, and Customer Success to improve customer outcomes.</li>
</ul>
<ul>
<li>Openness to diverse paths into the role, including transferable skills, formal computer science education, or equivalent practical experience, along with interest in open source and developer tools.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The GitLab Delivery - Operate team is part of the Infrastructure Platforms department, which enables how GitLab operates, scales, and is delivered across GitLab.com, GitLab Dedicated, and self-managed offerings.</p>
<p>We are a globally distributed team that owns deployment tooling and operational patterns to help customers run GitLab reliably on infrastructure ranging from virtual machines to Kubernetes clusters.</p>
<p>We work asynchronously across regions and work closely with other Infrastructure teams, along with Support and Customer Success, to turn lessons from operating GitLab at scale into product and tooling improvements that benefit customers across all deployment models.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes Operators, Helm charts, cloud-native deployment architectures, database lifecycle management, schema migrations, zero-downtime upgrade strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting them to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463917002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ec3e47f7-26c</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Infrastructure Engineering Automation team. As a key member of our team, you will lead the development of robust tooling and AI-powered solutions underpinned by a centralized source of truth for all infrastructure data.</p>
<p>Your primary focus will be on two core pillars: Orchestration &amp; Patterns, and Infrastructure Intelligence. In the former, you will build the platform that allows teams to productionize their own automations using durable execution frameworks and standardized IaC patterns. In the latter, you will create, source, and enrich critical infrastructure and organizational data and make it accessible and actionable for both humans and AI agents.</p>
<p>To succeed in this role, you will need to design and develop high-performance internal tools and APIs using Go (Golang) to manage infrastructure metadata and lifecycle. You will also design complex, long-running workflows using durable execution frameworks (like Temporal) to orchestrate tasks across Git, Cloud providers, and CI/CD pipelines. Additionally, you will develop and implement Model Context Protocol (MCP) servers and Agentic AI workflows to automate the creation, upgrading, and auditing of infrastructure configurations.</p>
<p>You will collaborate with Infrastructure, Security, and Development teams to design &#39;Infrastructure Intelligence&#39; tools that provide deep insights into asset ownership and EOL lifecycles. Your expertise in Go (Golang), Temporal, MCP, and A2A frameworks will be crucial in driving the success of this project.</p>
<p>If you&#39;re a motivated and knowledgeable software engineer with a passion for building infrastructure tools, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go (Golang), Temporal, MCP (Model Context Protocol), A2A (Agent-to-Agent) frameworks, Infrastructure as Code (IaC), Cloud providers (GCP and AWS), CI/CD tools (GitHub Actions, Helm, ArgoCD)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8400168002</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9d27e558-af6</externalid>
      <Title>Senior Site Reliability Engineer</Title>
      <Description><![CDATA[<p><strong>Role</strong></p>
<p>We are building a global operating network that finally enables supply-chain companies to collaborate within one platform. Our workflow engine empowers non-technical industry experts to model their complex manufacturing and operational processes. Our forms engine enables unprecedented data exchange between companies. And our upcoming AI engine can generate entire new processes and summarize the complex goings-on across thousands of workflows, identifying inefficiencies and driving optimization as companies react to a constantly-shifting global landscape.</p>
<p>As an SRE you will have the opportunity to shape our developer platform, work directly with customers, and architect solutions that balance the rigorous security and reliability requirements of global enterprises with the speed and flexibility of a rapidly growing series A organization.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to SRE-owned portions of application codebases related to infrastructure clients, SaaS clients, observability, and reliability patterns.</li>
<li>Contribute to the developer platform interfaces to enable a growing number of engineers, microservices, and environments (helm charts, CI platform, and deploy processes).</li>
<li>Advocate for new tools and processes that will help Regrello grow.</li>
<li>Take part in on-call rotations.</li>
<li>Collaborate with cross-functional teams, including Development, QA, and Product Management, to ensure successful releases.</li>
</ul>
<p><strong>Stack</strong></p>
<ul>
<li>GCP: GKE, CloudRun, Memorystore, CloudSQL, BigQuery</li>
<li>Kubernetes: helm, helmfile</li>
<li>Automation: Terraform, shell</li>
<li>Queue: Temporal, Machinery, Celery</li>
<li>LaunchDarkly</li>
<li>OTel / Prometheus / Grafana / Splunk</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>4-8 years of experience in site reliability, software engineering, or a related role.</li>
<li>Strong understanding of software development lifecycle (SDLC) and Agile methodologies.</li>
<li>Experience with CI/CD tools such as GitHub Actions, GitLab CI, or CircleCI.</li>
<li>Proficiency in scripting languages for automation tasks.</li>
<li>Fluency with cloud platforms (AWS, Azure, GCP), Kubernetes, feature flags, and modern backend technologies (experience with Go is strongly preferred, with the ability to quickly learn new technologies as needed).</li>
<li>A builder’s spirit (you have a track record of building projects for fun, staying updated with open-source developments, etc.)</li>
<li>Excellent problem-solving and communications skills, and attention to detail, with the ability to work effectively in a remote team environment.</li>
</ul>
<p><strong>Culture and Compensation</strong></p>
<p>We are a customer-obsessed, product-driven company that is building a flexible, hybrid/remote culture to enable the brightest minds in the industry. We are particularly interested in candidates based in our hubs of Seattle, San Francisco, and New York, but we will consider candidates who live anywhere in the US, Canada, or Mexico. We have industry-leading compensation packages, including equity and health benefits. We are willing to sponsor US work authorization if needed.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-200,000 per year</Salaryrange>
      <Skills>Site reliability engineering, SDLC, Agile methodologies, CI/CD (GitHub Actions, GitLab CI, CircleCI), scripting for automation, AWS, Azure, GCP (GKE, Cloud Run, Memorystore, Cloud SQL, BigQuery), Kubernetes, Helm, helmfile, Terraform, shell, Temporal, Machinery, Celery, feature flags, LaunchDarkly, OTel, Prometheus, Grafana, Splunk, Go</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Regrello</Employername>
      <Employerlogo>https://logos.yubhub.co/regrello.com.png</Employerlogo>
      <Employerdescription>Regrello is a 40-person startup reimagining automation in supply chains, with a $220-billion market opportunity.</Employerdescription>
      <Employerwebsite>https://regrello.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>150000</Compensationmin>
      <Compensationmax>200000</Compensationmax>
      <Applyto>https://jobs.lever.co/regrello/e4222908-c38b-4c4c-9067-9f66d94c0be2</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>82cdd12d-60d</externalid>
      <Title>Software Engineer, Deployment Infrastructure</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are seeking experienced backend software engineers to join our Deployment team. You&#39;ll help deploy and integrate our products (models, APIs, AI Studio...) across multiple infrastructure configurations, from leading cloud service providers to self-hosted (private cloud and on-premises) solutions. You&#39;ll work closely with the research, product, solution architect and program management teams to serve our frontier models to customers wherever they use our technology.</p>
<p>Responsibilities</p>
<ul>
<li>New releases – you will ensure fast and reliable launch of new products (from models to APIs) to customers</li>
<li>Build and test infrastructure – you will work to improve and extend the infrastructure needed to package, deploy and integrate our core technology within first-party systems and third-party platforms</li>
<li>Safety – you will help solve the unique challenges that come with maintaining AI safety on third-party platforms</li>
<li>Observability and Monitoring – you will collaborate closely with both internal and external stakeholders to ensure our services achieve high availability and deliver state-of-the-art performance for our users</li>
<li>Build automation to increase deployment performance (velocity, scalability)</li>
<li>Foster architecture improvements to make our products deployable on all configurations (including on-premises)</li>
<li>Drive cross-functional feature improvements with other product engineering teams (Le Chat, API/SDK, Mistral Code...)</li>
<li>Contribute to key technology and architecture trade-offs to break our deployment stack down into small, maintainable and testable pieces</li>
</ul>
<p>About you</p>
<ul>
<li>5+ years of relevant professional work experience</li>
<li>Master’s degree in Computer Science, Information Technology or a related field</li>
<li>Excellent proficiency in backend software development (Python, Golang)</li>
<li>Strong proficiency in infrastructure management (Docker, CI/CD, K8s, Helm, Terraform...)</li>
<li>Good knowledge of cloud ecosystems and understanding of the challenges of deploying LLMs in multiple environments (public cloud, private cloud, on-premises)</li>
<li>Autonomous and self-starter profile</li>
<li>Ability to communicate with influence</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Introduction call - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Live-coding Interview (Python) - 45 min</li>
<li>System Design Interview - 45 min</li>
<li>Optional: Deep Dive Interview (Staff/Lead specific) - 60 min</li>
<li>Culture-fit discussion - 30 min</li>
<li>References</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend software development, Python, Golang, infrastructure management, Docker, CI/CD, K8s, Helm, Terraform, cloud ecosystems, LLM deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and markets AI-powered products and solutions. It has a diverse workforce and operates globally.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/31364497-4081-454a-b50c-12d15daf6876</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6f25b435-69f</externalid>
      <Title>Technical Support Engineer – On-Premise</Title>
      <Description><![CDATA[<p>We are seeking a Technical Support Engineer - On-Premise Infrastructure to join our Support team in France. This role is ideal for someone who excels at technical troubleshooting, incident investigation, and customer communication in a B2B environment.</p>
<p>As a key member of the support team, you will be responsible for handling escalated technical issues from on-premise enterprise clients, reproducing complex problems, and collaborating with engineering, data, and product teams to ensure swift resolution. You will report directly to the Head of Support, and play a critical role in maintaining customer satisfaction and improving our support operations.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Investigation: Handle escalated tickets from enterprise clients via Intercom, focusing on on-premise infrastructure and AI-related issues (e.g., deployment, performance, integration, security).</li>
<li>Root Cause Analysis: Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (systems, networks, storage, GPU clusters, AI models).</li>
<li>Cross-Team Collaboration: Work closely with engineering and deployment teams to escalate, track, and resolve incidents efficiently.</li>
<li>Proactive Communication: Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.</li>
</ul>
<p>Knowledge Sharing &amp; Process Improvement:</p>
<ul>
<li>Documentation: Create and update technical FAQs, troubleshooting guides, and internal knowledge base articles to empower the self-serve/L1 team and reduce the recurrence of issues.</li>
<li>Feedback Loop: Identify recurring pain points in on-premise deployments and suggest improvements to product, documentation, or support workflows.</li>
</ul>
<p>Customer-Centric Approach:</p>
<ul>
<li>Empathy &amp; Ownership: Maintain a customer-first mindset, ensuring clients feel heard and supported, even in high-pressure situations.</li>
<li>Solution-Oriented: Proactively propose workarounds, fixes, or process optimizations to enhance the customer experience and reduce incident resolution time.</li>
</ul>
<p>Technical Expertise:</p>
<ul>
<li>On-Premise &amp; Cloud Environments: Deep understanding of Linux/Windows servers, networking, virtualization, storage, security (firewalls, GDPR compliance), and cloud providers (AWS, GCP, Azure).</li>
<li>Kubernetes/Helm: Experience with deployment, scaling, and troubleshooting of applications in Kubernetes clusters using Helm charts.</li>
<li>Terraform: Familiarity with Infrastructure as Code (IaC) for managing cloud resources is a strong plus.</li>
<li>AI Infrastructure: Knowledge of AI/ML pipelines, LLM/RAG deployments, GPU acceleration, and data storage solutions for enterprise clients.</li>
<li>Tooling: Proficiency in Intercom, monitoring tools, scripting (Bash/Python), and diagnostic utilities (logs, performance metrics).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux/Windows servers, Networking, Virtualization, Storage, Security, Kubernetes/Helm, Terraform, AI/ML pipelines, LLM/RAG deployments, GPU acceleration, Data storage solutions, Intercom, Monitoring tools, Scripting (Bash/Python), Diagnostic utilities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/f00a13aa-61f1-4c56-993c-20846adc2b15</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>153549f2-a4d</externalid>
      <Title>Technical Support Engineer - Use Cases</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are seeking a Technical Support Engineer - Use Cases to join our Support team in France. This role is ideal for someone who excels at technical troubleshooting, incident investigation, and customer communication in a B2B environment.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Technical Support &amp; Incident Management</strong></p>
<ul>
<li>Handle escalated tickets from enterprise clients via Intercom, focusing on applications and use cases built by our Solutions team and based on Mistral products (e.g., Mistral Studio, Document AI).</li>
<li>Root Cause Analysis: Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (e.g., API errors, edge-case failures, processing workflow issues).</li>
<li>Cross-Team Collaboration: Work closely with solutions and engineering teams to escalate, track, and resolve incidents efficiently.</li>
<li>Proactive Communication: Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.</li>
</ul>
<p><strong>Knowledge Sharing &amp; Process Improvement</strong></p>
<ul>
<li>Documentation: Create and update technical FAQs as well as applications’ documentation and troubleshooting guides.</li>
<li>Feedback Loop: Identify recurring pain points in customers’ applications and suggest improvements to product, documentation, or support workflows.</li>
</ul>
<p><strong>Customer-Centric Approach</strong></p>
<ul>
<li>Empathy &amp; Ownership: Maintain a customer-first mindset, ensuring clients feel heard and supported, even in high-pressure situations.</li>
<li>Solution-Oriented: Proactively propose workarounds, fixes, or process optimizations to enhance the customer experience and reduce incident resolution time.</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>Full-Stack Engineering: Experience with both frontend (React, NextJS, VueJS) and backend (Python, FastAPI) software engineering.</li>
<li>AI Engineering: Experience with AI and LLM applications.</li>
<li>Kubernetes/Helm: Experience with deployment, scaling, and troubleshooting of applications in Kubernetes clusters using Helm charts.</li>
<li>Tooling: Proficiency in Intercom, monitoring tools, scripting (Bash/Python), and diagnostic utilities (logs, performance metrics).</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>Required Experience: 3+ years in technical support, software engineering, or LLMs, with a focus on end-to-end systems.</li>
<li>Technical Skills: Hands-on experience troubleshooting complex technical issues in enterprise environments; knowledge of AI/ML workflows, data pipelines, and software engineering best practices; familiarity with ticketing systems (Intercom), GDPR compliance, and security best practices.</li>
<li>Soft Skills: Exceptional problem-solving and analytical skills; strong written and verbal communication in French and English (additional languages are a bonus); ability to explain technical concepts clearly to non-technical stakeholders.</li>
<li>Mindset: Customer-obsessed, with a passion for delivering high-quality support; collaborative, able to work effectively in a distributed, fast-paced team; curious and adaptable, with a willingness to learn and master new technologies.</li>
</ul>
<p><strong>Why Join Mistral AI?</strong></p>
<ul>
<li>Directly contribute to the success of enterprise AI solutions and shape the future of AI support.</li>
<li>Opportunities for career advancement in support leadership, technical specialization, or customer success.</li>
<li>Work with cutting-edge AI technology in a dynamic, mission-driven company.</li>
<li>Join a passionate, diverse, and low-ego team that values collaboration and continuous learning.</li>
<li>Hybrid flexibility (Paris or Marseille office) with a focus on work-life balance and professional development.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Daily lunch vouchers: Swile meal vouchers of €10.83 per worked day, 60% covered by the company</li>
<li>Sport: Enjoy discounted access to gyms and fitness studios through our Wellpass partnership</li>
<li>Transportation: Monthly contribution to a mobility pass via Betterway</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
<li>Visa sponsorship</li>
<li>Coaching: BetterUp coaching offered on a voluntary basis</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Full-Stack Engineering, AI Engineering, Kubernetes/Helm, Intercom, Monitoring Tools, Scripting (Bash/Python), Diagnostic Utilities (Logs, Performance Metrics)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides high-performance, open-source AI models, products, and solutions for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/a228ac73-62f1-4a2a-8afe-5070f445143f</Applyto>
      <Location>Marseille</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4b414123-045</externalid>
      <Title>Product Security Engineer II</Title>
<Description><![CDATA[<p>We are seeking a Product Security Engineer II to join our growing security team. This role will be critical in ensuring the security of our products across the entire software development lifecycle (SDLC) and will support a variety of security initiatives.</p>
<p>You will work closely with engineering, product, and operations teams to embed security best practices from design through to deployment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting the execution of a comprehensive product security strategy that aligns with the company&#39;s goals and risk appetite.</li>
<li>Working hands-on across code, infrastructure, and CI/CD to create agents, services, and pipelines that detect, prevent, and remediate risks, leveraging AI where it adds value.</li>
<li>Designing, building, and operating security automation for the SDLC (code scanning, dependency risk management, secrets detection, policy-as-code) integrated into CI/CD.</li>
<li>Performing manual design and implementation reviews of Greenlight products and services from a security perspective.</li>
<li>Establishing and enforcing secure development standards (e.g., API security, security patterns, IaC) and best practices across the organization.</li>
<li>Serving as a subject matter expert on the practical security of our AI and LLM ecosystem, and leading threat modeling exercises for novel AI systems applying advanced security and privacy best practices.</li>
<li>Leveraging automation and tools to continuously test, fuzz, and validate products and platform components for security issues.</li>
<li>Performing penetration testing and retesting to validate fixes.</li>
<li>Triaging findings from security researchers and leading incident response for PSIRT.</li>
<li>Providing on-call support for incident response and leading the handling of product-related security events and vulnerabilities.</li>
<li>Fostering a culture of security awareness and ownership across the Engineering and Product organizations.</li>
<li>Staying current with the latest security threats, vulnerabilities, and industry best practices to continuously evolve our security controls and processes.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, Java/Kotlin, React, Redux, Swift, SwiftUI, AWS, MySQL, DynamoDB, Redis, Kubernetes, Ambassador, Helm, Rancher, SAST, DAST, IAST, Penetration testing, Fuzzing, Scripting, Automation, Exploit writing, Cloud security principles, Security assessment of IoT hardware/firmware, Contribution to security community, Experience at Fintech or similar regulated companies, Startup Agility</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company serving over 6 million parents and kids with its award-winning banking app for families.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/6daa8340-f262-454c-be7d-e3adc813fe0e</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8e582153-6af</externalid>
      <Title>Senior DevOps Lead - Cloud &amp; Autonomous System</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>
<p>We are a small company with under 100 employees, operating with the energy of a startup. However, because we are publicly traded, our employees also get access to the liquidity of publicly-traded equity.</p>
<p>As a Senior DevOps Lead at Cyngn, you will play a vital role in architecting and managing infrastructure across cloud and autonomous vehicle systems. This position combines traditional cloud DevOps leadership with specialized expertise in robotics and autonomous systems infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Lead and architect cloud and vehicle infrastructure initiatives across AWS and ROS/Linux environments</li>
<li>Design and implement scalable solutions for both cloud services and autonomous vehicle systems</li>
<li>Establish and maintain DevOps best practices, CI/CD pipelines, and infrastructure as code</li>
<li>Drive observability, monitoring, and incident response strategies</li>
<li>Optimize performance and cost efficiency of cloud and edge computing resources</li>
<li>Mentor team members and foster a developer-friendly environment</li>
<li>Manage on-call rotations and incident response processes</li>
<li>Architect solutions for processing and storing large-scale vehicle telemetry data</li>
<li>Lead security initiatives and compliance efforts across infrastructure</li>
</ul>
<p>Requirements</p>
<ul>
<li>10+ years of relevant DevOps/Infrastructure experience</li>
<li>Proven track record as a technical lead in platform or infrastructure teams</li>
<li>Advanced expertise in AWS services, infrastructure as code (Terraform), and Kubernetes</li>
<li>Strong experience with service mesh (Istio) and Helm/Kustomize</li>
<li>Deep understanding of ROS/ROS2 and Linux kernel configurations</li>
<li>Experience with GPU configurations and ML infrastructure</li>
<li>Expertise in ARM and NVIDIA CUDA platform configurations</li>
<li>Strong programming skills in Python and shell scripting</li>
<li>Experience with infrastructure automation (Ansible)</li>
<li>Expertise in CI/CD tools (Jenkins, GitHub Actions)</li>
<li>Strong system architecture and design skills</li>
<li>Excellence in technical documentation</li>
<li>Outstanding problem-solving abilities</li>
<li>Strong leadership and mentoring capabilities</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with autonomous vehicle systems</li>
<li>Track record of optimizing GPU-based ML infrastructure</li>
<li>Experience with large-scale IoT deployments</li>
<li>Contributions to open-source projects</li>
<li>Experience with real-time systems and low-latency requirements</li>
<li>Expertise in security implementations including SSO, IdP, and AWS Cognito</li>
<li>Experience with JFrog artifactory and container registry management</li>
<li>Proficiency in AWS IoT Greengrass</li>
<li>Experience with container resource management on edge devices</li>
<li>Understanding of CPU affinity and priority scheduling</li>
<li>Track record of implementing cost optimization strategies</li>
<li>Experience with scaling systems both horizontally and vertically</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</li>
<li>Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</li>
<li>Company 401(k)</li>
<li>Commuter Benefits</li>
<li>Flexible vacation policy</li>
<li>Sabbatical leave opportunity after five years with the company</li>
<li>Paid Parental Leave</li>
<li>Daily lunches for in-office employees</li>
<li>Monthly meal and tech allowances for remote employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,000-225,000 per year</Salaryrange>
      <Skills>AWS services, infrastructure as code (Terraform), Kubernetes, service mesh (Istio), Helm/Kustomize, ROS/ROS2, Linux kernel configurations, GPU configurations, ML infrastructure, ARM, NVIDIA CUDA platform configurations, Python, shell scripting, infrastructure automation (Ansible), CI/CD tools (Jenkins, GitHub Actions), system architecture and design skills, technical documentation, problem-solving abilities, leadership and mentoring capabilities, autonomous vehicle systems, optimizing GPU-based ML infrastructure, large-scale IoT deployments, open-source projects, real-time systems and low-latency requirements, security implementations including SSO, IdP, and AWS Cognito, JFrog artifactory and container registry management, AWS IoT Greengrass, container resource management on edge devices, CPU affinity and priority scheduling, cost optimization strategies, scaling systems both horizontally and vertically</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/1c31b7d8-cf85-472f-9358-1e10189cf815</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>84b5f2ae-e50</externalid>
      <Title>Member of Technical Staff, Foundation (Backend Engineer)</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>As a Member of Technical Staff on the Domain Engineering team, you are responsible for ensuring a robust technology stack, enabling our company to build scalable, efficient, and maintainable products and allowing our product teams to focus on developing customer-focused features.</p>
<p>You are a strong individual contributor and you have the ability to significantly contribute to and execute complex engineering projects, enabled with appropriate coding and testing. You can understand the “why” in order to connect dependencies to the “bigger picture” and Anchorage mission and product roadmap.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate with other engineering teams to identify areas for improvements across our engineering stack.</li>
<li>Previous experience in establishing shared libraries across teams, with a focus on standardization, code quality, and reduced duplication.</li>
<li>Proven experience with application observability projects that involved setting up performance metrics, log aggregation, tracing, and alerting systems.</li>
</ul>
<p><strong>Complexity and Impact of Work</strong></p>
<ul>
<li>Find the right balance between progress (i.e. shipping quickly) and perfection (i.e. measuring twice).</li>
<li>Foster an efficient deterministic testing culture, with an emphasis on minimizing tech debt and bureaucracy.</li>
<li>Ship code that will impact the whole organization.</li>
</ul>
<p><strong>Organizational Knowledge</strong></p>
<ul>
<li>Collaborate across multiple teams, especially on integration, standardization, and shared resources.</li>
<li>Influence others by engaging in in-depth technical design discussions and demonstrating best practices through technical leadership by example.</li>
<li>Make a meaningful impact across the entire engineering organization, extending influence beyond the immediate team.</li>
</ul>
<p><strong>Communication and Influence</strong></p>
<ul>
<li>Communicate technical concepts and solutions effectively to non-technical stakeholders.</li>
<li>Build strong relationships with colleagues to drive collaboration and innovation.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>Are passionate about constantly seeking opportunities to refine and enhance existing systems and processes.</li>
<li>Are driven by a passion for being a force multiplier and influential technical leader in a dynamic, fast-paced startup environment.</li>
<li>Have expert coding skills in Golang.</li>
<li>Experienced in cross-functional projects, collaborating effectively with your team and adjacent teams to tackle complex challenges.</li>
<li>Have excellent soft skills, including the ability to adapt communication for both internal and external stakeholders in an effective manner, bridging gaps with empathy and proactive communication.</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You have experience with infrastructure-as-code: Terraform, Gitops, Helm.</li>
<li>You have experience with Google Cloud Platform &amp; Security.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Application Observability, Shared Libraries, Code Quality, Reduced Duplication, Infrastructure-as-code, Terraform, Gitops, Helm, Google Cloud Platform, Security</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure. It was founded in 2017 and has a Series D valuation over $3 billion.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/96ff9ab4-93c0-412e-a0ac-2c5ed4e076ed</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8af6c2b6-03c</externalid>
      <Title>Member of Technical Staff, Domain (Backend Engineer)</Title>
<Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. As a Member of Technical Staff on the Domain Engineering team, you are responsible for ensuring a robust technology stack, enabling our company to build scalable, efficient, and maintainable products and allowing our product teams to focus on developing customer-focused features.</p>
<p>You are a strong individual contributor and have the ability to significantly contribute to and execute complex engineering projects, enabled with appropriate coding and testing. You can understand the “why” in order to connect dependencies to the “bigger picture” and Anchorage mission and product roadmap.</p>
<p><strong>Technical Skills</strong></p>
<ul>
<li>Collaborate with other engineering teams to identify areas for improvements across our engineering stack.</li>
<li>Previous experience in establishing shared libraries across teams, with a focus on standardization, code quality, and reduced duplication.</li>
<li>Proven experience with application observability projects that involved setting up performance metrics, log aggregation, tracing, and alerting systems.</li>
</ul>
<p><strong>Complexity and Impact of Work</strong></p>
<ul>
<li>Find the right balance between progress (i.e. shipping quickly) and perfection (i.e. measuring twice).</li>
<li>Foster an efficient deterministic testing culture, with an emphasis on minimizing tech debt and bureaucracy.</li>
<li>Ship code that will impact the whole organization.</li>
</ul>
<p><strong>Organizational Knowledge</strong></p>
<ul>
<li>Collaborate across multiple teams, especially on integration, standardization, and shared resources.</li>
<li>Influence others by engaging in in-depth technical design discussions and demonstrating best practices through technical leadership by example.</li>
<li>Make a meaningful impact across the entire engineering organization, extending influence beyond the immediate team.</li>
</ul>
<p><strong>Communication and Influence</strong></p>
<ul>
<li>Communicate technical concepts and solutions effectively to non-technical stakeholders.</li>
<li>Build strong relationships with colleagues to drive collaboration and innovation.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>Are passionate about constantly seeking opportunities to refine and enhance existing systems and processes.</li>
<li>Are driven by a passion for being a force multiplier and influential technical leader in a dynamic, fast-paced startup environment.</li>
<li>Have expert coding skills in Golang.</li>
<li>Experienced in cross-functional projects, collaborating effectively with your team and adjacent teams to tackle complex challenges.</li>
<li>Have excellent soft skills, including the ability to adapt communication for both internal and external stakeholders in an effective manner, bridging gaps with empathy and proactive communication.</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You have experience with infrastructure-as-code, Terraform, Gitops, Helm.</li>
<li>You have experience with Google Cloud Platform &amp; Security.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Application Observability, Performance Metrics, Log Aggregation, Tracing, Alerting Systems, Infrastructure-as-code, Terraform, Gitops, Helm, Google Cloud Platform &amp; Security</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure. It was founded in 2017 and has a Series D valuation over $3 billion.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/5898d01d-a4a5-44e5-8d20-2f6710dc2035</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7bc95dd3-97d</externalid>
      <Title>Senior Engineer</Title>
      <Description><![CDATA[<p>At Compound Labs, we&#39;re on a mission to change the way people use their cryptocurrencies. The majority of cryptocurrencies sit idle on exchanges and in wallets, without yielding interest. We&#39;re developing protocols, governance systems, developer tools, and financial products to enable the future of finance.</p>
<p>As a Senior Engineer at Compound, you will help lead our team in building open, transparent, frictionless financial protocols. You will be given the opportunity to design, build, test, and launch products that make DeFi more secure, capital-efficient, accessible, and useful.</p>
<p>We work in Solidity, Typescript, Rust, Elixir, Elm, and are forever learning and optimizing everything we make in the name of correct coding. At Compound, you will be part of a cutting-edge team building high-caliber technology that has a massive impact on the future of decentralized finance.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and test smart contracts in Solidity for EVM-based blockchains</li>
<li>Ensure the correctness of financial algorithms</li>
<li>Build back-end web services that interact with blockchains</li>
<li>Keep a pulse on the DeFi ecosystem and identify improvements or areas of growth for the protocol</li>
<li>Enthusiastically collaborate with a small team, owning and planning projects for long-term impact</li>
<li>Teach and mentor other engineers</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Familiarity with the current DeFi ecosystem</li>
<li>Expertise building highly-trafficked web services</li>
<li>Exceptional judgment, strategic thinking and creative problem-solving skills with a strong analytical mindset</li>
<li>5+ years of professional engineering experience</li>
<li>BA or BS degree in Computer Science or a related technical field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Solidity or Yul</li>
<li>Layer 2 blockchains (e.g. Arbitrum, Optimism) and cross-chain bridges</li>
<li>Foundry or Hardhat</li>
<li>Typescript, Elixir, Phoenix</li>
<li>Elm or React</li>
</ul>
<p><strong>Benefits and Perks:</strong></p>
<ul>
<li>Competitive compensation</li>
<li>Stock options in Compound Labs</li>
<li>3+ weeks of vacation</li>
<li>Paid parental leave</li>
<li>Full medical, dental, and vision insurance</li>
<li>401(k) plan</li>
<li>Company on-sites across the world</li>
<li>Remote work available, with the option to work in our downtown SF office (metro accessible, with transportation stipends)</li>
</ul>
<p><strong>Additional Information:</strong></p>
<p>We are an equal opportunity employer and value diversity at our company. We welcome qualified candidates of all races, creeds, genders, ages, veteran statuses, and sexual orientations to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Solidity, Typescript, Rust, Elixir, Elm, DeFi ecosystem, Financial algorithms, Web services, Analytical mindset, Exceptional judgment, Strategic thinking, Creative problem-solving, Layer 2 blockchains, Cross-chain bridges, Foundry, Hardhat, Phoenix, React</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Compound Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/compound.finance.png</Employerlogo>
      <Employerdescription>Compound Labs develops protocols, governance systems, developer tools, and financial products to enable the future of finance.</Employerdescription>
      <Employerwebsite>https://compound.finance/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/compound-2/ec06d806-10fd-4f56-9538-fcd8a7b455a0</Applyto>
      <Location>Continental US</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c1ce7a22-6fd</externalid>
      <Title>Software Engineer, Deployment Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking experienced backend software engineers to join our Deployment team. As a Software Engineer on the Deployment team, you will be responsible for ensuring the fast and reliable launch of new products to customers, building and testing infrastructure, solving safety challenges, collaborating with internal and external stakeholders to ensure high availability and performance, and fostering architecture improvements.</p>
<p>About you:</p>
<ul>
<li>5+ years of relevant professional work experience</li>
<li>Master&#39;s degree in Computer Science, Information Technology or a related field</li>
<li>Excellent proficiency in backend software development (Python, Golang)</li>
<li>Strong proficiency in infrastructure management (Docker, CI/CD, K8s, Helm, Terraform...)</li>
<li>Good knowledge of cloud ecosystems and an understanding of the challenges of deploying LLMs in multiple environments (public cloud, private cloud, on-premises)</li>
<li>Autonomous and self-starter profile</li>
<li>Ability to communicate with influence</li>
</ul>
<p>Hiring Process:</p>
<ul>
<li>Introduction call - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Live-coding Interview (Python) - 45 min</li>
<li>System Design Interview - 45 min</li>
<li>Optional: Deep Dive Interview (Staff/Lead specific) - 60 min</li>
<li>Culture-fit discussion - 30 min</li>
<li>References</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend software development, infrastructure management, cloud ecosystems, LLM deployment, Python, Golang, Docker, CI/CD, K8s, Helm, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops and integrates AI technology into daily working life, offering a comprehensive AI platform for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/31364497-4081-454a-b50c-12d15daf6876</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>3e77a678-cf0</externalid>
      <Title>Technical Support Engineer – On-Premise</Title>
      <Description><![CDATA[<p>We are seeking a Technical Support Engineer - On-Premise Infrastructure to join our Support team in France. This role is ideal for someone who excels at technical troubleshooting, incident investigation, and customer communication in a B2B environment.</p>
<p>As a key member of the support team, you will be responsible for handling escalated technical issues from on-premise enterprise clients, reproducing complex problems, and collaborating with engineering, data, and product teams to ensure swift resolution. You will report directly to the Head of Support and play a critical role in maintaining customer satisfaction and improving our support operations.</p>
<p>This is a unique opportunity to work at the intersection of AI infrastructure, customer success, and technical problem-solving.</p>
<p>Key Responsibilities:</p>
<p>Technical Support &amp; Incident Management</p>
<p>• Frontline Investigation: Handle escalated tickets from enterprise clients via Intercom, focusing on on-premise infrastructure and AI-related issues (e.g., deployment, performance, integration, security).</p>
<p>• Root Cause Analysis: Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (systems, networks, storage, GPU clusters, AI models).</p>
<p>• Cross-Team Collaboration: Work closely with engineering and deployment teams to escalate, track, and resolve incidents efficiently.</p>
<p>• Proactive Communication: Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.</p>
<p>Knowledge Sharing &amp; Process Improvement</p>
<p>• Documentation: Create and update technical FAQs, troubleshooting guides, and internal knowledge base articles to empower the self-serve/L1 team and reduce the recurrence of issues.</p>
<p>• Feedback Loop: Identify recurring pain points in on-premise deployments and suggest improvements to product, documentation, or support workflows.</p>
<p>Customer-Centric Approach</p>
<p>• Empathy &amp; Ownership: Maintain a customer-first mindset, ensuring clients feel heard and supported, even in high-pressure situations.</p>
<p>• Solution-Oriented: Proactively propose workarounds, fixes, or process optimizations to enhance the customer experience and reduce incident resolution time.</p>
<p>Technical Expertise</p>
<p>• On-Premise &amp; Cloud Environments: Deep understanding of Linux/Windows servers, networking, virtualization, storage, security (firewalls, GDPR compliance), and cloud providers (AWS, GCP, Azure).</p>
<p>• Kubernetes/Helm: Experience with deployment, scaling, and troubleshooting of applications in Kubernetes clusters using Helm charts.</p>
<p>• Terraform: Familiarity with Infrastructure as Code (IaC) for managing cloud resources is a strong plus.</p>
<p>• AI Infrastructure: Knowledge of AI/ML pipelines, LLM/RAG deployments, GPU acceleration, and data storage solutions for enterprise clients.</p>
<p>• Tooling: Proficiency in Intercom, monitoring tools, scripting (Bash/Python), and diagnostic utilities (logs, performance metrics).</p>
<p>Who you are:</p>
<p>Required Experience: 3+ years in technical support, systems administration, or DevOps, with a focus on on-premise or hybrid infrastructures.</p>
<p>Technical Skills:</p>
<p>• Hands-on experience with troubleshooting complex technical issues in enterprise environments.</p>
<p>• Knowledge of AI/ML workflows, data pipelines, or high-performance computing (a strong plus).</p>
<p>• Familiarity with ticketing systems (Intercom), GDPR compliance, and security best practices.</p>
<p>Soft Skills:</p>
<p>• Exceptional problem-solving and analytical skills.</p>
<p>• Strong written and verbal communication in French and English (additional languages are a bonus).</p>
<p>• Ability to explain technical concepts clearly to non-technical stakeholders.</p>
<p>Mindset:</p>
<p>• Customer-obsessed, with a passion for delivering high-quality support.</p>
<p>• Collaborative, able to work effectively in a distributed, fast-paced team.</p>
<p>• Curious and adaptable, with a willingness to learn and master new technologies.</p>
<p>Why Join Mistral AI?</p>
<p>• Impact: Directly contribute to the success of enterprise AI deployments and shape the future of on-premise support.</p>
<p>• Growth: Opportunities for career advancement in support leadership, technical specialization, or customer success.</p>
<p>• Innovation: Work with cutting-edge AI technology in a dynamic, mission-driven company.</p>
<p>• Team: Join a passionate, diverse, and low-ego team that values collaboration and continuous learning.</p>
<p>• Work Environment: Hybrid flexibility (Paris office) with a focus on work-life balance and professional development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux/Windows servers, Networking, Virtualization, Storage, Security, Cloud providers, Kubernetes/Helm, Terraform, AI/ML pipelines, LLM/RAG deployments, GPU acceleration, Data storage solutions, Intercom, Monitoring tools, Scripting, Diagnostic utilities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence technology for various industries.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/f00a13aa-61f1-4c56-993c-20846adc2b15</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>58f5680d-d38</externalid>
      <Title>Technical Delegate (M/F)</Title>
      <Description><![CDATA[<p><strong>TECHNICAL DELEGATE (M/F)</strong></p>
<p>Le Mans</p>
<p><strong>The role</strong></p>
<p>Reporting to the Technical Manager, the <strong>Technical Delegate</strong> will join the ACO&#39;s permanent team, which is responsible for drafting and enforcing the technical regulations.</p>
<p>They will work primarily on the <strong>ALMS</strong> and <strong>ELMS</strong> championships and their support series, and will represent the ACO to manufacturers, competitors, and partner federations.</p>
<p><strong>You will:</strong></p>
<p><strong>Before events:</strong></p>
<p>• Independently manage the technical aspects of events organised by the ACO</p>
<p>• Train the scrutineering teams in coordination with Management</p>
<p>• Prepare technical notes and bulletins</p>
<p>• Organise technical inspections and liaise with the officials</p>
<p><strong>During events:</strong></p>
<p>• Act as Technical Delegate, in accordance with the International Sporting Code</p>
<p>• Carry out technical inspections (initial, spot, final)</p>
<p>• Produce reports and records to professional standards</p>
<p><strong>After events:</strong></p>
<p>• Take part in operational and technical debriefings</p>
<p>• Produce detailed reporting</p>
<p>• Contribute to the evolution of the technical and sporting regulations (LMP2, LMP3, LMGT3)</p>
<p><strong>Cross-functional:</strong></p>
<p>• Coordinate the work of the specialist units (Operations, Performance, Electronics) across the championship scope</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>13th-month bonus</Salaryrange>
      <Skills>Generalist engineering degree (e.g. mechanical engineering), Successful first experience in motorsport (engineering, race operations…), Resistance to stress, diplomacy, rigour and discretion, Comfortable working remotely, Fluent written and spoken French and English, Proficiency with Microsoft Office and collaborative tools (MS Teams, Google Drive, SharePoint…), Availability for around 10 trips per year, including weekends and public holidays, Command of the technical regulations, Knowledge of the ALMS and ELMS championships, Experience coordinating sporting events</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Automobile Club de l&apos;Ouest</Employername>
      <Employerlogo>https://logos.yubhub.co/recrutement.lemans.org.png</Employerlogo>
      <Employerdescription>The Automobile Club de l&apos;Ouest (ACO) is a French motorsport organisation that creates and organises the 24 Hours of Le Mans, a legendary endurance racing event held since 1923 on the Circuit de la Sarthe.</Employerdescription>
      <Employerwebsite>https://recrutement.lemans.org</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://recrutement.lemans.org/offer/11284-NDQ0NTMtcm9MNDNw</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f7c94e9c-5ab</externalid>
      <Title>Member of Technical Staff, Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff, Software Engineer to join its MAI SuperIntelligence team in Zürich, Switzerland. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff, Software Engineer, you will:</p>
<ul>
<li>Design and build core platform services for scalable training and evaluation, including cluster orchestration, job scheduling, data and compute pipelines, and artifact management.</li>
<li>Standardize containerized workflows by maintaining Docker images, CI/CD, and runtime configurations; advocate for best practices in security, reproducibility, and cost efficiency.</li>
<li>Implement end-to-end observability and operations through metrics, tracing, logging, dashboard development, monitoring, and automated alerts for model training and platform health (using Prometheus, Grafana, OpenTelemetry).</li>
<li>Architect and operate services on Azure cloud platforms, managing infrastructure-as-code (Terraform/Helm), secrets, networking, and storage.</li>
<li>Enhance developer experience by creating tools, CLIs, and portals that simplify job submission, metrics analysis, and experiment management for generalist software engineering and research teams.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Strong software engineering background building reliable, scalable production systems (Python preferred).</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Hands-on experience supporting large-scale ML / LLM training, evaluation, or experimentation infrastructure.</li>
<li>Operating GPU-heavy workloads in cloud environments using Docker and Kubernetes (scheduling, utilization, isolation).</li>
<li>Designing and running data / compute pipelines and orchestration (e.g., Airflow, Argo) with object storage (Azure Blob / S3).</li>
<li>Building secure, reproducible platforms using CI/CD, infrastructure-as-code (Terraform, Helm), container security, and secrets management.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work with a talented team of engineers and researchers.</li>
<li>Access to cutting-edge technology and resources.</li>
<li>Flexible work arrangements, including remote work options.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>Strong software engineering background, Python, Docker, Kubernetes, Airflow, Argo, Azure Blob, S3, CI/CD, Terraform, Helm, Container security, Secrets management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that help businesses and individuals solve complex problems. Microsoft AI is committed to making a positive impact on society through the responsible development and use of AI.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-engineer-mai-superintelligence-team/</Applyto>
      <Location>Zürich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>91c82cff-0d2</externalid>
      <Title>Kubernetes Cluster Administrator (DevOps)</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and self-driven Kubernetes Administrator to manage and support our on-premises Kubernetes clusters. The ideal candidate will have deep experience in on-prem Kubernetes administration and automation tools such as ArgoCD and Ansible, and be proficient in scripting languages (e.g., Bash or Python).</p>
<p><strong>What you&#39;ll do</strong></p>
<p>You will play a key role in designing, maintaining, and scaling our Kubernetes-based infrastructure, supporting mission-critical applications, and automating operational workflows in a secure and resilient environment.</p>
<ul>
<li>Administer and maintain on-prem Kubernetes clusters, including installation, upgrades, patching, and monitoring.</li>
<li>Automate infrastructure provisioning and configuration management using ArgoCD.</li>
<li>Troubleshoot system, container, and network-level issues across distributed environments.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Proven hands-on experience managing on-premises Kubernetes environments (kubeadm, containerd, etc.).</li>
<li>Strong experience with ArgoCD for GitOps workflows and continuous delivery.</li>
<li>Strong knowledge of observability and monitoring stacks, including Grafana Alloy, Prometheus, metrics pipelines, and alerting.</li>
</ul>
]]></Description>
      <Jobtype>permanent</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, ArgoCD, Ansible, Bash, Python, Shell, Helm, GitOps, Container security</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>AVL-AST D.O.O.</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL is one of the world’s leading mobility technology companies for development, simulation and testing in the automotive industry, and beyond. We provide concepts, solutions and methodologies in fields like vehicle development and integration, e-mobility, automated and connected mobility (ADAS/AD), and software for a greener, safer, better world of mobility.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Zagreb-Kubernetes-Cluster-Administrator-%28DevOps%29/1365028433/</Applyto>
      <Location>Zagreb</Location>
      <Country></Country>
      <Postedate>2026-02-18</Postedate>
    </job>
    <job>
      <externalid>28fd37f4-a07</externalid>
      <Title>DevOps Developer</Title>
      <Description><![CDATA[<p>Join us for an opportunity to work with some of the best game development teams in the world. We are looking for a DevOps Engineer to join the tools development and automation team supporting BioWare, Motive, Maxis, and Full Circle.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>This DevOps Developer role in the Software Quality organization works with Quality Assurance and Game Development teams to create tools and technical strategies. Our goal is to improve automation infrastructure and increase efficiencies in the Game Development and QA processes.</p>
<ul>
<li>Operate and maintain tools, ensuring exceptional uptime and secure environments.</li>
<li>Act as first responder for incidents and drive continuous improvement based on root cause analysis.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>5+ years of experience managing distributed, scalable, and resilient high-performing systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#/.NET experience, Experience implementing data and infrastructure security best practices, Experience with container workload technologies such as Kubernetes, Helm and Docker, Experience with monitoring/observability systems such as Prometheus, Grafana and/or Datadog, Experience with continuous integration and delivery, using pipeline automation systems such as Jenkins, GitLab and GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Developer-II/212007</Applyto>
      <Location>Montreal</Location>
      <Country></Country>
      <Postedate>2026-02-06</Postedate>
    </job>
  </jobs>
</source>