{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/normalisation"},"x-facet":{"type":"skill","slug":"normalisation","display":"Normalisation","count":3},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8a68e8bd-dd5"},"title":"Consulting Architect - Observability","description":"<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>\n<p>You will translate business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack. You will lead end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation. You will partner with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</p>\n<p>You will provide technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles. You will collaborate cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement. 
You will capture and share best practices, lessons learned, and solution patterns across the Elastic Services community.</p>\n<p>You will guide customers in using Elastic Agents, Beats, and Logstash for time-series data ingestion, stream processing, and normalisation, along with related technologies. You will design and implement custom dashboards, visualisations, and alerting for critical observability use cases in Kibana. You will optimise ingestion pipelines for performance, scalability, and resiliency at enterprise scale.</p>\n<p>You will have 5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains. You will have strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash. You will have knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</p>\n<p>You will have an understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs. You will have experience with one or more: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code. You will have familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</p>\n<p>You will have proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale. You will have hands-on expertise in distributed systems and large-scale infrastructure. You will have the ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</p>\n<p>You will have experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene. You will have strong communication and presentation skills, with experience engaging directly with customers. 
You will have a Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</p>\n<p>You will be comfortable working in highly distributed teams, both remote and on-site when needed. You may require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8a68e8bd-dd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7763314","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$133,100-$210,600 USD","x-skills-required":["observability","monitoring","time-series data ingestion","processing","pipelines","Elastic Agents","Beats","Logstash","messaging queues","Kafka","Redis","ingestion optimisation strategies","distributed tracing","metrics pipelines","log aggregation","anomaly detection","SLOs/SLIs","Kubernetes","cloud platforms","infrastructure as code","Elastic Common Schema","data parsing","normalisation","databases","Docker","VMs","Lucene"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:11.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, time-series data ingestion, processing, pipelines, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, ingestion optimisation strategies, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, 
cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":133100,"maxValue":210600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_396fe53d-121"},"title":"Consulting Architect - Observability","description":"<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>\n<p>You&#39;ll collaborate with Elastic’s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Observability platform, ensuring customers maximise the value of their data while achieving business outcomes. 
This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack.</li>\n<li>Leading end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation.</li>\n<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</li>\n<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles.</li>\n<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement.</li>\n<li>Capturing and sharing best practices, lessons learned, and solution patterns across the Elastic Services community.</li>\n<li>Contributing to internal enablement, mentoring, and a culture of continuous learning and collaboration.</li>\n</ul>\n<p>Required skills include:</p>\n<ul>\n<li>5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains.</li>\n<li>Expertise in the Telecommunications domain, especially with Mobile networks and devices.</li>\n<li>Strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash.</li>\n<li>Knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</li>\n<li>Understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs.</li>\n<li>Experience with one or more: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code.</li>\n<li>Familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</li>\n<li>Proven experience deploying Elastic 
Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale.</li>\n<li>Hands-on expertise in distributed systems and large-scale infrastructure.</li>\n<li>Ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</li>\n<li>Experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene.</li>\n<li>Strong communication and presentation skills, with experience engaging directly with customers.</li>\n<li>Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</li>\n<li>Comfortable working in highly distributed teams, both remote and on-site when needed.</li>\n<li>May require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_396fe53d-121","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7440232","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["observability","monitoring","Elastic Stack","time-series data ingestion","Elastic Agents","Beats","Logstash","messaging queues","Kafka","Redis","distributed tracing","metrics pipelines","log aggregation","anomaly detection","SLOs/SLIs","Kubernetes","cloud platforms","infrastructure as code","Elastic Common Schema","data 
parsing","normalisation","databases","Docker","VMs","Lucene"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:26.428Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, Elastic Stack, time-series data ingestion, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8ae6102f-700"},"title":"GRC Automation Engineering Lead","description":"<p><strong>About the Role</strong></p>\n<p>We are seeking a GRC Automation Lead to join our GRC organisation and build the technical foundation for how we scale our risk and compliance programs. In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems.</p>\n<p>This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic. The core challenge is a data problem: compliance information lives across dozens of systems—cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories—and your job is to design systems that bring it together, normalise it, and make it actionable.</p>\n<p>At Anthropic, you&#39;ll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort. 
You&#39;ll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments.</p>\n<p>Working closely with Security, IT, and Engineering teams, you&#39;ll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn&#39;t otherwise be possible.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead the team that establishes foundational GRC processes and architecture. Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.</li>\n</ul>\n<ul>\n<li>Build data pipelines that aggregate risk, control, and asset information from across our technology stack. This means solving hard data integration problems: mapping disparate schemas, handling inconsistent data quality, and creating unified views of compliance posture through dashboards and reporting tools.</li>\n</ul>\n<ul>\n<li>Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.</li>\n</ul>\n<ul>\n<li>Translate written policies and compliance requirements into policy-as-code—working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.</li>\n</ul>\n<ul>\n<li>Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.</li>\n</ul>\n<ul>\n<li>Design and deploy agentic AI workflows that extend team 
capacity, using Claude to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.</li>\n</ul>\n<ul>\n<li>Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines—working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.</li>\n</ul>\n<ul>\n<li>Build and lead the GRC Automation function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 3-4+ years of experience managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.</li>\n</ul>\n<ul>\n<li>Are a systems thinker first. You understand how complex environments work: how data flows between systems, where integration points exist, what breaks when systems don&#39;t talk to each other. Your strength is designing the right architecture and environment for security monitoring, not necessarily implementing it yourself.</li>\n</ul>\n<ul>\n<li>Have 5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation. 
We care about your ability to solve integration problems, not your programming language proficiency.</li>\n</ul>\n<ul>\n<li>Are able to write production-level code in at least one programming language (e.g., Python, Rust, Go).</li>\n</ul>\n<ul>\n<li>Have a relentless focus on data integration: you understand how to pull data from multiple sources, normalise it, join it meaningfully, and surface insights. You&#39;re comfortable reasoning about messy, inconsistent data and designing systems that handle edge cases gracefully.</li>\n</ul>\n<ul>\n<li>Understand APIs and integration patterns conceptually: REST APIs, webhooks, authentication flows, polling vs. push architectures, and can evaluate systems based on how well they expose data and support automation, even if you&#39;re not writing the integration code yourself.</li>\n</ul>\n<ul>\n<li>Can work independently with minimal guidance, taking ownership of complex problems from design through implementation while managing ambiguity inherent in early-stage programs.</li>\n</ul>\n<ul>\n<li>Have strong analytical and problem-solving skills, with the ability to break down complex problems into manageable parts and develop creative solutions.</li>\n</ul>\n<ul>\n<li>Are able to communicate complex technical ideas to both technical and non-technical stakeholders, with a strong focus on collaboration and teamwork.</li>\n</ul>\n<ul>\n<li>Are passionate about staying up-to-date with industry trends and emerging technologies, with a willingness to learn and adapt to new tools and techniques.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8ae6102f-700","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4980335008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GRC","Automation","Data Pipelines","System Integrations","APIs","Integration Patterns","REST APIs","Webhooks","Authentication Flows","Polling vs. Push Architectures","Data Integration","Data Normalisation","Data Joining","Data Modelling","Data Analysis","Data Visualisation","Agile Methodologies","Scrum","Kanban","Continuous Integration","Continuous Deployment","Continuous Monitoring","Cloud Infrastructure","Identity Providers","HR Platforms","Ticketing Tools","Code Repositories","Version Control","CI/CD Pipelines","GRC Tools","Policy-as-Code","Automated Checks","Continuous Validation","Feedback Loops","Policy Implementation","Technical Controls","Policy Evolution","Infrastructure Realities","Compliance Requirements","Engineer Communication","Technical Ideas","Collaboration","Teamwork","Industry Trends","Emerging Technologies","Learning","Adaptation","New Tools","New Techniques"],"x-skills-preferred":["Python","Rust","Go","Java","C++","JavaScript","TypeScript","SQL","NoSQL","Cloud Computing","DevOps","Security","Compliance","Risk Management","Audit Programs","SOC 2","ISO","HIPAA","FedRAMP","GRC Platforms","GRC Tools","Policy Management","Compliance Management","Risk Management","Audit Management","Compliance Automation","GRC Automation","Policy Automation","Compliance Orchestration","Risk Orchestration","Audit Orchestration","Compliance Intelligence","Risk Intelligence","Audit Intelligence","Compliance Analytics","Risk Analytics","Audit Analytics","Compliance Reporting","Risk Reporting","Audit 
Reporting","Compliance Dashboarding","Risk Dashboarding","Audit Dashboarding"],"datePosted":"2026-03-08T13:43:53.373Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GRC, Automation, Data Pipelines, System Integrations, APIs, Integration Patterns, REST APIs, Webhooks, Authentication Flows, Polling vs. Push Architectures, Data Integration, Data Normalisation, Data Joining, Data Modelling, Data Analysis, Data Visualisation, Agile Methodologies, Scrum, Kanban, Continuous Integration, Continuous Deployment, Continuous Monitoring, Cloud Infrastructure, Identity Providers, HR Platforms, Ticketing Tools, Code Repositories, Version Control, CI/CD Pipelines, GRC Tools, Policy-as-Code, Automated Checks, Continuous Validation, Feedback Loops, Policy Implementation, Technical Controls, Policy Evolution, Infrastructure Realities, Compliance Requirements, Engineer Communication, Technical Ideas, Collaboration, Teamwork, Industry Trends, Emerging Technologies, Learning, Adaptation, New Tools, New Techniques, Python, Rust, Go, Java, C++, JavaScript, TypeScript, SQL, NoSQL, Cloud Computing, DevOps, Security, Compliance, Risk Management, Audit Programs, SOC 2, ISO, HIPAA, FedRAMP, GRC Platforms, GRC Tools, Policy Management, Compliance Management, Risk Management, Audit Management, Compliance Automation, GRC Automation, Policy Automation, Compliance Orchestration, Risk Orchestration, Audit Orchestration, Compliance Intelligence, Risk Intelligence, Audit Intelligence, Compliance Analytics, Risk Analytics, Audit Analytics, Compliance Reporting, Risk Reporting, Audit Reporting, Compliance Dashboarding, Risk Dashboarding, Audit Dashboarding"}]}