{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/metrics-pipelines"},"x-facet":{"type":"skill","slug":"metrics-pipelines","display":"Metrics Pipelines","count":3},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_001ca0fe-fc3"},"title":"Senior Software Engineer, Server Fleet Infrastructure","description":"<p>At CoreWeave, we&#39;re looking for a senior software engineer to join our team and help us build scalable, high-performance computing systems that power the largest AI workloads in the world. As a senior software engineer, you will design and implement solutions to problems of scale for multi-site deployment and management of CoreWeave&#39;s global server hardware fleet. You will build and maintain backend services and APIs in Go or Python to interact with Kubernetes and other infrastructure systems. You will also develop provisioning services, automation workflows, and fleet management tools that span from bare metal to container orchestration.</p>\n<p>In this role, you will work closely with our team to ensure that our infrastructure is reliable, efficient, and scalable. 
You will participate in an on-call rotation and be responsible for resolving integration challenges across the entire infrastructure stack, from data center hardware to orchestration platforms.</p>\n<p>We&#39;re looking for someone with a strong background in software engineering, experience with Go and/or Python, and familiarity with CI/CD tools like Argo, Flux, and GitHub Actions. You should also have a strong understanding of Linux internals and experience designing, implementing, and monitoring Kubernetes operators for custom resource definitions.</p>\n<p>As a senior software engineer at CoreWeave, you will have the opportunity to work on a wide range of projects and contribute to the development of our cloud computing platform. You will be part of a collaborative and dynamic team that is passionate about building innovative solutions to complex problems.</p>\n<p>If you&#39;re a motivated and experienced software engineer looking for a new challenge, we encourage you to apply for this role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_001ca0fe-fc3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4553828006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $242,000","x-skills-required":["Go","Python","Kubernetes","CI/CD tools","Linux internals","Kubernetes operators"],"x-skills-preferred":["Infrastructure automation","Configuration management","Distributed cloud computing","Metrics pipelines","Custom alerts","Monitoring strategies"],"datePosted":"2026-04-18T15:49:27.251Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / 
Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, CI/CD tools, Linux internals, Kubernetes operators, Infrastructure automation, Configuration management, Distributed cloud computing, Metrics pipelines, Custom alerts, Monitoring strategies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a68e8bd-dd5"},"title":"Consulting Architect - Observability","description":"<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>\n<p>You will translate business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack. You will lead end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation. You will partner with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</p>\n<p>You will provide technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles. You will collaborate cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement. 
You will capture and share best practices, lessons learned, and solution patterns across the Elastic Services community.</p>\n<p>You will guide customers in using Elastic Agents, Beats, and Logstash for time-series data ingestion, stream processing, and normalisation, along with related technologies. You will design and implement custom dashboards, visualisations, and alerting for critical observability use cases in Kibana. You will optimise ingestion pipelines for performance, scalability, and resiliency at enterprise scale.</p>\n<p>You will have 5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains. You will have strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash. You will have knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</p>\n<p>You will have an understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs. You will have experience with one or more of: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code. You will have familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</p>\n<p>You will have proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale. You will have hands-on expertise in distributed systems and large-scale infrastructure. You will have the ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</p>\n<p>You will have experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene. You will have strong communication and presentation skills, with experience engaging directly with customers. 
You will have a Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</p>\n<p>You will be comfortable working in highly distributed teams, both remote and on-site when needed. The role may require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</p>","url":"https://yubhub.co/jobs/job_8a68e8bd-dd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7763314","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$133,100-$210,600 USD","x-skills-required":["observability","monitoring","time-series data ingestion","processing","pipelines","Elastic Agents","Beats","Logstash","messaging queues","Kafka","Redis","ingestion optimisation strategies","distributed tracing","metrics pipelines","log aggregation","anomaly detection","SLOs/SLIs","Kubernetes","cloud platforms","infrastructure as code","Elastic Common Schema","data parsing","normalisation","databases","Docker","VMs","Lucene"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:11.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, time-series data ingestion, processing, pipelines, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, ingestion optimisation strategies, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, 
cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":133100,"maxValue":210600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_396fe53d-121"},"title":"Consulting Architect - Observability","description":"<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>\n<p>You&#39;ll collaborate with Elastic’s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Observability platform, ensuring customers maximise the value of their data while achieving business outcomes. 
This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack.</li>\n<li>Leading end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation.</li>\n<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</li>\n<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles.</li>\n<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement.</li>\n<li>Capturing and sharing best practices, lessons learned, and solution patterns across the Elastic Services community.</li>\n<li>Contributing to internal enablement, mentoring, and a culture of continuous learning and collaboration.</li>\n</ul>\n<p>Required skills include:</p>\n<ul>\n<li>5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains.</li>\n<li>Expertise in the Telecommunications domain, especially with Mobile networks and devices.</li>\n<li>Strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash.</li>\n<li>Knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</li>\n<li>Understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs.</li>\n<li>Experience with one or more of: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code.</li>\n<li>Familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</li>\n<li>Proven experience deploying Elastic 
Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale.</li>\n<li>Hands-on expertise in distributed systems and large-scale infrastructure.</li>\n<li>Ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</li>\n<li>Experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene.</li>\n<li>Strong communication and presentation skills, with experience engaging directly with customers.</li>\n<li>Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</li>\n<li>Comfortable working in highly distributed teams, both remote and on-site when needed.</li>\n<li>The role may require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</li>\n</ul>","url":"https://yubhub.co/jobs/job_396fe53d-121","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7440232","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["observability","monitoring","Elastic Stack","time-series data ingestion","Elastic Agents","Beats","Logstash","messaging queues","Kafka","Redis","distributed tracing","metrics pipelines","log aggregation","anomaly detection","SLOs/SLIs","Kubernetes","cloud platforms","infrastructure as code","Elastic Common Schema","data 
parsing","normalisation","databases","Docker","VMs","Lucene"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:26.428Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, Elastic Stack, time-series data ingestion, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene"}]}