{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/cloud-composer"},"x-facet":{"type":"skill","slug":"cloud-composer","display":"Cloud Composer","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68a62835-66b"},"title":"Senior DevOps Engineer","description":"<p>We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines.</p>\n<p>The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Drive initiatives to implement and enforce best practices for data streaming, processing, analytics and monitoring infrastructure.</li>\n<li>Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).</li>\n<li>Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.</li>\n<li>Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.</li>\n<li>Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, and Google Pub/Sub, BigQuery, BigTable</li>\n<li>Familiarity with workflow orchestration services such as Apache Airflow and Google Cloud Composer.</li>\n<li>Develop and maintain automation scripts and tooling using Python to support DevOps processes.</li>\n<li>Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.</li>\n<li>Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.</li>\n<li>Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.</li>\n<li>Ensure compliance with security policies and best practices in cloud environments.</li>\n<li>Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.</li>\n<li>Collaborate with cross-functional teams to improve development workflows and infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.</li>\n<li>Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.</li>\n<li>Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.</li>\n<li>Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.</li>\n<li>Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub 
Enterprise Actions.</li>\n<li>Programming skills in Python for automation and scripting.</li>\n<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).</li>\n<li>Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.</li>\n<li>Experience with cost optimization strategies for cloud infrastructure.</li>\n<li>Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.</li>\n<li>Ability to work collaboratively in an agile environment and support multiple teams.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with data lake architectures and big data processing frameworks (e.g., Apache Spark, Flink, Snowflake, BigQuery).</li>\n<li>Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).</li>\n<li>Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.</li>\n<li>Knowledge of service mesh technologies like Istio.</li>\n<li>Experience with GitOps workflows and Kubernetes-native tooling.</li>\n</ul>","url":"https://yubhub.co/jobs/job_68a62835-66b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8496473002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","GCP","Kubernetes","Terraform","Jenkins","ArgoCD","GitHub Enterprise Actions","Python","Apache Airflow","Google Cloud Composer","CloudWatch","Prometheus","Grafana","Datadog"],"x-skills-preferred":["Data lake architectures","Big data processing frameworks","Event-driven architectures","Message queues","Workflow orchestration tools","Service mesh technologies","GitOps workflows","Kubernetes-native tooling"],"datePosted":"2026-04-24T12:19:32.227Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, GCP, Kubernetes, Terraform, Jenkins, ArgoCD, GitHub Enterprise Actions, Python, Apache Airflow, Google Cloud Composer, CloudWatch, Prometheus, Grafana, Datadog, Data lake architectures, Big data processing frameworks, Event-driven architectures, Message queues, Workflow orchestration tools, Service mesh technologies, GitOps workflows, Kubernetes-native tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bf7a45a5-8d0"},"title":"Analytics Scientist","description":"<p>We are seeking a highly motivated and technically skilled professional to join the Analytics Solutions Integration team within Credit Analytics. This position is ideal for professionals excited about implementing predictive models for multiple business functions, modernizing legacy processes, and applying advanced technologies to transform analytics delivery.</p>\n<p>As an Analytics Scientist, you will bridge the gap between analytical model development and production deployment, using SAS and vendor tools across on-premises and cloud-based applications. 
You will also be a key contributor in modernizing legacy mainframe-based batch testing processes through automation, dataset comparison frameworks, and data summarization and reporting.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement, validate, test, and productionize predictive models and risk strategies across global platforms.</li>\n<li>Collaborate with Data Scientists, business teams, and IT to ensure a smooth transition of models from development to production.</li>\n<li>Design and implement automated pipelines that support batch testing workflows involving mainframe JCL, flat files, VSAM, or other legacy datasets.</li>\n<li>Develop reusable and repeatable automation for comparing and summarizing data between legacy and modernized systems for multiple business functions.</li>\n<li>Use GCP services such as BigQuery, PostgreSQL, Cloud Functions, Cloud Storage, Cloud Composer, Cloud Run, and Pub/Sub to build scalable workflows that support analytics delivery.</li>\n<li>Move mainframe outputs to Cloud Storage for processing, and use SQL, Python, and LLM-enhanced logic to analyze results.</li>\n<li>Identify opportunities to introduce automation, GenAI tooling, and workflow simplification, and develop proofs of concept to enhance delivery processes.</li>\n<li>Provide data analysis, SQL/SAS/Python programming, and on-demand reporting aligned to business needs.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Data Science, Information Systems, Engineering, or a related field.</li>\n<li>4–5 years of programming experience in object-oriented and procedural paradigms (e.g., Java, SQL, and Python), preferably with experience in Statistical Analysis System software (such as &quot;SAS Real-Time Decision Manager&quot;).</li>\n<li>Experience with Relational Database Management Systems (such as DB2).</li>\n<li>1–2 years of hands-on experience with Google Cloud Platform (GCP), including BigQuery, PostgreSQL, Cloud Storage, and Cloud Functions.</li>\n<li>Familiarity with Waterfall, Agile, and PDO methodologies.</li>\n<li>Experience working with IT testing environments, regression testing, and automated validation.</li>\n<li>Strong understanding of modern automation frameworks and AI-powered tooling such as Agentic AI.</li>\n</ul>\n<p>Even better, you’ll have:</p>\n<ul>\n<li>Master’s degree.</li>\n<li>Experience integrating or automating processes involving legacy mainframe systems.</li>\n
<li>GenAI tooling: Agentic AI, workflows &amp; cloud integration.</li>\n</ul>","url":"https://yubhub.co/jobs/job_bf7a45a5-8d0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Credit Company","sameAs":"https://www.fordcredit.com/","logo":"https://logos.yubhub.co/fordcredit.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/61844","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"SG5-SG8","x-skills-required":["SAS","Vendor tools","Google Cloud Platform","BigQuery","PostgreSQL","Cloud Functions","Cloud Storage","Cloud Composer","Cloud Run","Pub/Sub","SQL","Python","LLM-enhanced logic","Automation","GenAI tooling","Agentic AI","Waterfall","Agile","PDO methodologies","Relational Database Management Systems","DB2"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:19:20.554Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"SAS, Vendor tools, Google Cloud Platform, BigQuery, PostgreSQL, Cloud Functions, Cloud Storage, Cloud Composer, Cloud Run, Pub/Sub, SQL, Python, LLM-enhanced logic, Automation, GenAI tooling, Agentic AI, Waterfall, Agile, PDO methodologies, Relational Database Management Systems, DB2","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":95000,"maxValue":140000,"unitText":"YEAR"}}}]}