<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>68a62835-66b</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines.</p>
<p>The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Drive initiatives to implement and enforce best practices for data streaming, processing, analytics, and monitoring infrastructure.</li>
<li>Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).</li>
<li>Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.</li>
<li>Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.</li>
<li>Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, Google Pub/Sub, BigQuery, and Bigtable.</li>
<li>Build and maintain workflows using orchestration services such as Apache Airflow and Google Cloud Composer.</li>
<li>Develop and maintain automation scripts and tooling using Python to support DevOps processes.</li>
<li>Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.</li>
<li>Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.</li>
<li>Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.</li>
<li>Ensure compliance with security policies and best practices in cloud environments.</li>
<li>Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.</li>
<li>Collaborate with cross-functional teams to improve development workflows and infrastructure.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.</li>
<li>Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.</li>
<li>Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.</li>
<li>Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.</li>
<li>Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub Enterprise Actions.</li>
<li>Programming skills in Python for automation and scripting.</li>
<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).</li>
<li>Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.</li>
<li>Experience with cost optimization strategies for cloud infrastructure.</li>
<li>Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.</li>
<li>Ability to work collaboratively in an agile environment and support multiple teams.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with data lake architectures, big data processing frameworks, and analytical data warehouses (e.g., Apache Spark, Flink, Snowflake, BigQuery).</li>
<li>Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).</li>
<li>Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.</li>
<li>Knowledge of service mesh technologies like Istio.</li>
<li>Experience with GitOps workflows and Kubernetes-native tooling.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, GCP, Kubernetes, Terraform, Jenkins, ArgoCD, GitHub Enterprise Actions, Python, Apache Airflow, Google Cloud Composer, CloudWatch, Prometheus, Grafana, Datadog, Data lake architectures, Big data processing frameworks, Event-driven architectures, Message queues, Workflow orchestration tools, Service mesh technologies, GitOps workflows, Kubernetes-native tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a technology company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8496473002</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
  </jobs>
</source>