{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-contracts"},"x-facet":{"type":"skill","slug":"data-contracts","display":"Data Contracts","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d57ab2d-f3b"},"title":"Cloud Solution Architect","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow&#39;s transportation.</p>\n<p>If you&#39;re looking for the chance to leverage advanced technology to redefine the transportation landscape, enhance the customer experience, and improve people&#39;s lives: this is the opportunity for you. Join us and challenge your IT expertise and analytical skills to help create vehicles that are as smart as you are.</p>\n<p>To meet the growing needs of the Customer analytics business, the team is looking for a self-motivated, technically proficient individual to craft and shepherd coherent solutions. This will require collaboration with a range of stakeholders to clarify requirements, establish pragmatic approaches, and support and articulate decisions over time. 
You will join a cloud architecture team that works closely with engineering teams and other architects across the organisation.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Technical Requirements</strong></p>\n<ul>\n<li>Extensive experience with Google Cloud Platform (GCP), specifically BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner and Apigee.</li>\n</ul>\n<ul>\n<li>Security &amp; Networking: Strong understanding of cloud security protocols, IAM, encryption, and complex network topologies.</li>\n</ul>\n<ul>\n<li>Data Management: Proficiency in Enterprise Data Platforms, Data mesh architecture and data-driven architectural patterns.</li>\n</ul>\n<ul>\n<li>DevOps Tooling: Hands-on experience with GitHub, SonarQube, Checkmarx, and FOSSA.</li>\n</ul>\n<ul>\n<li>Software Engineering: Strong background in building Web Services and maintaining Clean Code standards.</li>\n</ul>\n<p><strong>Technical Leadership &amp; Strategy</strong></p>\n<ul>\n<li>System Design: Work with engineering teams to refine system designs, evangelising for horizontal scalability, resilience, and Clean Code compliance.</li>\n</ul>\n<ul>\n<li>Product Collaboration: Partner with Product Managers to decompose complex business needs into incremental, production-ready user stories within an Agile/Sprint methodology.</li>\n</ul>\n<ul>\n<li>Architectural Governance: Assess and document the rationale and tradeoffs for technical decisions; contribute to the broader Cloud Architecture team to improve global practices.</li>\n</ul>\n<ul>\n<li>DevOps Excellence: Utilise and improve CI/CD pipelines using GitHub and automated testing/security tools to maximise deployment efficiency and minimise risk.</li>\n</ul>\n<p><strong>Cloud, Networking &amp; Security</strong></p>\n<ul>\n<li>Secure Infrastructure: Serve as the primary architect for cloud solutions, ensuring &#39;Secure-by-Design&#39; principles are applied across Google Cloud services (Dataflow, Dataproc, CloudRun, 
CloudSQL, Spanner).</li>\n</ul>\n<ul>\n<li>Advanced Networking: Design and optimise cloud networking configurations, including VPCs, Service Controls, Load Balancing, and Private Service Connect to ensure high availability and low latency.</li>\n</ul>\n<ul>\n<li>Cyber Security Oversight: Integrate security scanning and compliance into the architecture (utilising Checkmarx, SonarQube, and FOSSA). Proactively address vulnerabilities in distributed systems and AI models (e.g., OWASP Top 10 for LLMs).</li>\n</ul>\n<ul>\n<li>API &amp; Data Contracts: Bolster &#39;Data as a Product&#39; practices by enforcing strict API standards and data contracts to ensure seamless, secure interoperability between services.</li>\n</ul>\n<ul>\n<li>FinOps &amp; Cost Optimisation: Drive fiscal responsibility by right-sizing GCP resources and optimising Generative AI architectures (token management/model selection) to maximise ROI.</li>\n</ul>\n<ul>\n<li>SRE &amp; Performance Tuning: Apply Site Reliability Engineering principles to ensure high availability, minimise system latency, and lead root-cause analysis for complex, distributed system failures.</li>\n</ul>\n<ul>\n<li>DevSecOps &amp; Problem Solving: Integrate security automation into CI/CD pipelines to ensure &#39;Secure-by-Design&#39; deployments while solving complex architectural trade-offs between speed, scale, and risk.</li>\n</ul>\n<ul>\n<li>Continuous Learning: Stay at the forefront of AI research, specifically regarding autonomous agents, prompt engineering etc</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>AI development tools and frameworks (e.g., LangChain, LangGraph, or Agent Dev Kit) to accelerate the delivery of intelligent applications.</li>\n</ul>\n<ul>\n<li>Agentic &amp; GenAI Design: Lead the architectural design of Agentic AI systems (multi-agent orchestration) and Generative AI solutions, including Retrieval-Augmented Generation (RAG) patterns and LLM integration.</li>\n</ul>\n<ul>\n<li>Kubernetes 
(GKE): Experience managing containerised workloads at scale.</li>\n</ul>\n<ul>\n<li>Kafka/Event-Driven Design: Experience with high-throughput messaging and event-driven architectures.</li>\n</ul>\n<ul>\n<li>MLOps: Familiarity with the end-to-end lifecycle of machine learning models in production.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<p><strong>You&#39;ll have...</strong></p>\n<ul>\n<li>Requires a bachelor&#39;s or foreign equivalent degree in computer science, information technology or a technology related field</li>\n</ul>\n<ul>\n<li>5+ years of Software engineering experience using Java or Python developing services (APIs, REST, etc.)</li>\n</ul>\n<ul>\n<li>2+ years of experience with Google Cloud Platform or other cloud service provider (AWS, Azure, etc.) and associated cloud components.</li>\n</ul>\n<ul>\n<li>Experience designing/architecting and running distributed systems in a production environment</li>\n</ul>\n<ul>\n<li>STRONG communications skills and cognitive agility - ability to engage in deep technical discussions with customers and peers, become a trusted technical advisor, and maintain good documentation</li>\n</ul>\n<p><strong>Even better, you may have...</strong></p>\n<ul>\n<li>Master&#39;s degree in computer science, electrical engineering or a closely related field of study</li>\n</ul>\n<ul>\n<li>Familiarity with a breadth of programming languages, platforms, and systems</li>\n</ul>\n<ul>\n<li>Experience with asynchronous messaging and eventually consistent system design</li>\n</ul>\n<ul>\n<li>An agile, pragmatic, and empirical mindset</li>\n</ul>\n<ul>\n<li>Critical thinking, decision-making and leadership aptitudes</li>\n</ul>\n<ul>\n<li>Good organisational and problem-solving abilities</li>\n</ul>\n<ul>\n<li>MDM, Entity Resolution, Customer Analytics and Marketing Analytics experience is a huge plus.</li>\n</ul>\n<p>You may not check every box, or your experience may look a little different from what we&#39;ve outlined, but if 
you think you can bring value to Ford Motor Company, we encourage you to apply!</p>\n<p><strong>As an established global company, we offer the benefit of choice. You can choose what your Ford future will look like: will your story span the globe, or keep you close to home? Will your career be a deep dive into what you love, or a series of new teams and new skills? Will you be a leader, a changemaker, a technical expert, a culture builder…or all of the above? No matter what you choose, we offer a work life that works for you, including:</strong></p>\n<ul>\n<li>Immediate medical, dental, and prescription drug coverage</li>\n</ul>\n<ul>\n<li>Flexible family care, parental leave, new parent ramp-up programs, subsidised back-up child care and more</li>\n</ul>\n<ul>\n<li>Vehicle discount programme for employees and family members, and management leases</li>\n</ul>\n<ul>\n<li>Tuition assistance</li>\n</ul>\n<ul>\n<li>Established and active employee resource groups</li>\n</ul>\n<ul>\n<li>Paid time off for individual and team community service</li>\n</ul>\n<ul>\n<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>\n</ul>\n<ul>\n<li>Paid time off and the option to purchase additional vacation time.</li>\n</ul>\n<p><strong>For a detailed look at our benefits, click here:</strong> Benefit Summary</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d57ab2d-f3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://corporate.ford.com/","logo":"https://logos.yubhub.co/corporate.ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62370","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$115,000-$192,900","x-skills-required":["Google Cloud 
Platform","BigQuery","Vertex AI","Dataflow","Dataproc","Cloud Run","CloudSQL","Spanner","Apigee","Security & Networking","IAM","Encryption","Complex Network Topologies","Data Management","Enterprise Data Platforms","Data Mesh Architecture","Data-Driven Architectural Patterns","DevOps Tooling","GitHub","SonarQube","Checkmarx","FOSSA","Software Engineering","Web Services","Clean Code Standards","System Design","Horizontal Scalability","Resilience","Clean Code Compliance","Product Collaboration","Agile/Sprint Methodology","Architectural Governance","Cloud Architecture","DevOps Excellence","CI/CD Pipelines","Automated Testing/Security Tools","Secure Infrastructure","Secure-by-Design Principles","Cloud Services","Advanced Networking","VPCs","Service Controls","Load Balancing","Private Service Connect","Cyber Security Oversight","Security Scanning","Compliance","Distributed Systems","AI Models","API & Data Contracts","Data as a Product","API Standards","Data Contracts","Seamless Interoperability","FinOps & Cost Optimisation","Fiscal Responsibility","GCP Resources","Generative AI Architectures","Token Management","Model Selection","ROI Maximisation","SRE & Performance Tuning","High Availability","System Latency","Root-Cause Analysis","DevSecOps & Problem Solving","Security Automation","Continuous Learning","AI Research","Autonomous Agents","Prompt Engineering","Kubernetes","Containerised Workloads","Kafka/Event-Driven Design","High-Throughput Messaging","Event-Driven Architectures","MLOps","Machine Learning Models","End-to-End Lifecycle"],"x-skills-preferred":["AI Development Tools","Frameworks","LangChain","LangGraph","Agent Dev Kit","Agentic & GenAI Design","Multi-Agent Orchestration","Generative AI Solutions","Retrieval-Augmented Generation","LLM Integration","Kubernetes 
(GKE)"],"datePosted":"2026-04-24T12:22:00.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Google Cloud Platform, BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner, Apigee, Security & Networking, IAM, Encryption, Complex Network Topologies, Data Management, Enterprise Data Platforms, Data Mesh Architecture, Data-Driven Architectural Patterns, DevOps Tooling, GitHub, SonarQube, Checkmarx, FOSSA, Software Engineering, Web Services, Clean Code Standards, System Design, Horizontal Scalability, Resilience, Clean Code Compliance, Product Collaboration, Agile/Sprint Methodology, Architectural Governance, Cloud Architecture, DevOps Excellence, CI/CD Pipelines, Automated Testing/Security Tools, Secure Infrastructure, Secure-by-Design Principles, Cloud Services, Advanced Networking, VPCs, Service Controls, Load Balancing, Private Service Connect, Cyber Security Oversight, Security Scanning, Compliance, Distributed Systems, AI Models, API & Data Contracts, Data as a Product, API Standards, Data Contracts, Seamless Interoperability, FinOps & Cost Optimisation, Fiscal Responsibility, GCP Resources, Generative AI Architectures, Token Management, Model Selection, ROI Maximisation, SRE & Performance Tuning, High Availability, System Latency, Root-Cause Analysis, DevSecOps & Problem Solving, Security Automation, Continuous Learning, AI Research, Autonomous Agents, Prompt Engineering, Kubernetes, Containerised Workloads, Kafka/Event-Driven Design, High-Throughput Messaging, Event-Driven Architectures, MLOps, Machine Learning Models, End-to-End Lifecycle, AI Development Tools, Frameworks, LangChain, LangGraph, Agent Dev Kit, Agentic & GenAI Design, Multi-Agent Orchestration, Generative AI Solutions, Retrieval-Augmented Generation, LLM Integration, Kubernetes 
(GKE)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":115000,"maxValue":192900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3cc71f96-6a2"},"title":"Analytics Engineer","description":"<p>Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform, and with it, the future global financial system.</p>\n<p>To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>\n<p>We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high-calibre colleagues, and who actively seeks feedback to keep levelling up.</p>\n<p>We want someone who will run towards, not away from, solving the company’s hardest problems.</p>\n<p>Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.</p>\n<p>While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.</p>\n<p>The Analytics Engineering team builds and maintains trusted data infrastructure across the organisation. 
This role focuses on building reliable, well-governed data foundations for compliance and regulatory reporting across Coinbase&#39;s international entities.</p>\n<p>As Coinbase&#39;s international presence grows, so does the volume and complexity of regulatory data requirements. Your mission is to build a platform: a reliable, automated, self-serve data foundation for regulatory reporting.</p>\n<p>You will start by developing deep expertise in the regulatory data landscape, then progressively build data models, automate recurring workflows, and enable compliance stakeholders to answer regulatory questions independently with confidence and accuracy.</p>\n<p>The goal is to transition from reactive, manual reporting toward a scalable platform that minimises manual data-pull work over time.</p>\n<p>We are actively evolving toward an AI-agent-driven data stack where humans write specs, AI generates data models, and agents handle analytics on clean data domains.</p>\n<p>You will be an early builder of this operating model within the regulatory reporting domain.</p>\n<p>What We Do:</p>\n<ul>\n<li>Build and maintain the Regulatory Data Mart as the single source of truth for regulatory reporting across Coinbase&#39;s international entities, handling the nuances of multi-entity, multi-jurisdiction reporting</li>\n</ul>\n<ul>\n<li>Enable compliance stakeholders to self-serve regulatory data needs rather than depending on ad-hoc data pulls</li>\n</ul>\n<ul>\n<li>Build AI-driven workflows that automate regulatory reporting while maintaining the highest accuracy standards</li>\n</ul>\n<ul>\n<li>Ensure Coinbase meets regulatory reporting SLAs and stays compliant across international markets</li>\n</ul>\n<ul>\n<li>Own end-to-end data quality through data contracts, validation checks, and monitoring</li>\n</ul>\n<p>What you’ll be doing (i.e. 
job duties):</p>\n<ul>\n<li>Design and build data models that serve as the foundation for regulatory reporting, with the entity-level granularity and definitions regulators require</li>\n</ul>\n<ul>\n<li>Build and maintain data quality checks, monitoring, and data contracts; implement reconciliation against prior submissions</li>\n</ul>\n<ul>\n<li>Write specs and definitions that AI tools use to generate pipelines and answer regulatory questions accurately</li>\n</ul>\n<ul>\n<li>Standardize regulatory metric definitions, entity-specific nuances, and historical reporting decisions into durable, machine-readable formats</li>\n</ul>\n<ul>\n<li>Partner with Compliance, Legal, and Business Operations to translate regulatory requirements into data models, and with upstream Engineering to ensure accurate, well-contracted source data</li>\n</ul>\n<ul>\n<li>Shift progressively from handling regulatory data requests toward building automated, self-serve solutions that reduce manual effort over time</li>\n</ul>\n<ul>\n<li>Maintain clear, structured documentation of business logic optimised for AI agent consumption, so agents can reliably trace and query regulatory data end-to-end</li>\n</ul>\n<p>What We Look For in You:</p>\n<ul>\n<li>You think in systems, not tasks: When something is broken or a data request lands, you move toward it and ask “how do I make sure no one ever has to do this manually again?”</li>\n</ul>\n<ul>\n<li>You treat AI as your first collaborator: You break problems down, delegate execution to agents, and focus on what requires human judgment. You design data, metadata, and documentation so agents can reason over them without human translation.</li>\n</ul>\n<ul>\n<li>You learn fast in unfamiliar territory: You pull knowledge from people, docs, and data quickly and turn tribal knowledge into durable infrastructure. 
You understand that building the right platform means doing the work manually first to learn the patterns worth automating</li>\n</ul>\n<ul>\n<li>You choose accuracy over speed and it shows: Regulatory data cannot be wrong. Data contracts, validation frameworks, quality checks. You know that trust is earned through rigor, especially when agents are generating the pipelines</li>\n</ul>\n<ul>\n<li>You’re strong in the analytics engineering fundamentals: Advanced SQL, Python, data modelling (star/snowflake, OBTs, SCDs), pipeline development (dbt, Airflow), and modern warehouse architecture (Snowflake, Databricks).</li>\n</ul>\n<ul>\n<li>You operate independently: You proactively identify what needs to be done, and drive it forward without waiting for direction from your manager or stakeholders.</li>\n</ul>\n<p>Job #: P76809</p>\n<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>\n<p>Annual base salary range (excluding equity and bonus): £100,620-£111,800 GBP</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3cc71f96-6a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7843707","x-work-arrangement":"remote","x-experience-level":null,"x-job-type":"full-time","x-salary-range":"£100,620-£111,800 GBP","x-skills-required":["Advanced SQL","Python","data modelling","pipeline development","modern warehouse architecture","AI-driven workflows","regulatory data","data quality","data contracts","validation 
checks","metadata","documentation"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:07:38.946Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - UK"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Advanced SQL, Python, data modelling, pipeline development, modern warehouse architecture, AI-driven workflows, regulatory data, data quality, data contracts, validation checks, metadata, documentation","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":100620,"maxValue":111800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f160b80e-77d"},"title":"eCommerce Data Analyst","description":"<p>We are seeking a highly analytical and impact-driven eCommerce Data Analyst to join our global eCommerce team. This role sits at the intersection of customer behavior, digital experience, and revenue performance, transforming data into actionable insights that drive growth across our websites.</p>\n<p>You will analyze full-funnel customer journeys across web, marketing, and CRM touchpoints, leveraging modern analytics platforms, cloud data warehouses, and A/B testing. 
The ideal candidate thrives in close collaboration with marketing, webstore, product, and engineering teams.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Leverage CDP and first-party data to build customer segments, cohorts, and lifecycle views, partnering with marketing and CRM teams to support audience activation, personalization, and campaign optimization.</li>\n<li>Conduct in-depth analysis of website traffic, user behavior, and eCommerce performance to identify trends, opportunities, and risks across the full customer funnel.</li>\n<li>Ideate, support, and analyze A/B and multivariate experiments, defining success metrics, interpreting results with statistical rigor, and recommending next steps based on business impact.</li>\n<li>Collaborate with cross-functional teams (marketing, webstore, product, and engineering) to develop and execute data-driven strategies for web and eCommerce initiatives.</li>\n<li>Define and document event tracking and measurement requirements aligned with business objectives, ensuring accurate and scalable data collection.</li>\n<li>Monitor and report on the effectiveness of promotional activities, marketing initiatives, and website performance, translating results into actionable insights.</li>\n<li>Build and maintain automated dashboards and self-service reporting, communicating insights through clear narratives, visualizations, and executive-ready summaries that proactively surface opportunities and risks.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>3+ years of experience in Commerce, web, or digital analytics</li>\n<li>Bachelor’s degree in Data Science, Statistics, Economics, Business Information Systems, or a related field.</li>\n<li>Proven experience analyzing consumer-facing digital products or eCommerce platforms.</li>\n<li>Strong understanding of eCommerce KPIs, digital marketing metrics, and customer lifecycle measurement.</li>\n<li>Must have hands-on experience with SQL, cloud data warehouse 
(e.g. Snowflake), GA4, CDP (such as mParticle).</li>\n<li>Experience working with large, complex, and multi-source datasets.</li>\n<li>Excellent data visualization skills (Looker Studio, Power BI) and ability to communicate complex insights effectively.</li>\n<li>Strong attention to detail and ability to manage multiple projects simultaneously.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience with AI-assisted analytics, forecasting, or anomaly detection.</li>\n<li>Knowledge of product analytics, personalization, or recommendation systems.</li>\n<li>Exposure to server-side tracking, event schemas, and data contracts.</li>\n<li>Experience working in global or multi-region eCommerce environments.</li>\n<li>AI-ecommerce experience</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f160b80e-77d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Corsair","sameAs":"https://www.corsair.com/","logo":"https://logos.yubhub.co/corsair.com.png"},"x-apply-url":"https://edix.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/8662","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$75,000—$110,000 USD","x-skills-required":["SQL","cloud data warehouse","GA4","CDP","data visualization","data analysis","A/B testing","multivariate experiments"],"x-skills-preferred":["AI-assisted analytics","forecasting","anomaly detection","product analytics","personalization","recommendation systems","server-side tracking","event schemas","data contracts"],"datePosted":"2026-03-10T13:05:43.510Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Milpitas, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, cloud data warehouse, GA4, CDP, data visualization, data analysis, 
A/B testing, multivariate experiments, AI-assisted analytics, forecasting, anomaly detection, product analytics, personalization, recommendation systems, server-side tracking, event schemas, data contracts","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":75000,"maxValue":110000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ef0c826-856"},"title":"Engineering Manager, Safeguards Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Anthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly — and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organisation access that data safely and ergonomically.</p>\n<p>As Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. 
This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements — and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities</li>\n<li>Own the strategy and execution for porting the safeguards offline data stack — including PII storage and tooling — across new cloud and deployment environments as Anthropic expands</li>\n<li>Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints</li>\n<li>Drive tooling and architecture decisions that maximise data retention within the bounds of our privacy and compliance requirements</li>\n<li>Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g. 
HIPAA, EU privacy regulations)</li>\n<li>Collaborate closely with enterprise customers and product teams on zero data retention offerings, balancing safety needs with robust enterprise data contracts</li>\n<li>Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination</li>\n<li>Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals</li>\n<li>Partner with recruiting to attract, hire, and retain strong engineering talent</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 4+ years of front-line engineering management experience</li>\n<li>Have a track record of leading teams that build and operate data infrastructure at scale</li>\n<li>Have hands-on software engineering experience as an individual contributor prior to moving into management</li>\n<li>Have a strong understanding of data privacy principles, PII handling, and compliance frameworks</li>\n<li>Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities</li>\n<li>Have experience working cross-functionally across infrastructure, product, and compliance or security teams</li>\n<li>Are clear and persuasive communicators, both in writing and in person</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience with multi-cloud or multi-region data portability, particularly in regulated environments</li>\n<li>Have built privacy-preserving data pipelines or interfaces for ML workloads</li>\n<li>Have experience with enterprise data contracts or zero data retention architectures</li>\n<li>Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data</li>\n<li>Have a passion for building diverse and inclusive teams</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education 
requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ef0c826-856","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5103078008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000USD\n£325,000 - £390,000GBP","x-skills-required":["data infrastructure","data privacy","compliance frameworks","software engineering","team management","cross-functional collaboration","communication","data portability","multi-cloud","multi-region","regulated environments","privacy-preserving data pipelines","ML workloads","enterprise data contracts","zero data retention architectures"],"x-skills-preferred":["in-memory storage","compute for sensitive data","novel approaches to data processing","diverse and inclusive teams"],"datePosted":"2026-03-08T13:42:50.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK; New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, data privacy, compliance frameworks, software engineering, team management, cross-functional collaboration, communication, data portability, multi-cloud, multi-region, regulated environments, privacy-preserving data pipelines, ML workloads, enterprise data contracts, zero data retention architectures, in-memory storage, compute for sensitive data, novel approaches to data processing, diverse and inclusive 
teams","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6215398a-2c4"},"title":"Senior Software Engineer, Forward Deployed (U.S. Public Sector)","description":"<p><strong>About Invisible</strong></p>\n<p>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most.</p>\n<p>Our platform cleans, labels, and structures company data so it is ready for AI. It adapts models to each business and adds human expertise when needed, the same approach we have used to improve models for more than 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere.</p>\n<p>Our successes span industries, from supply chain automation for Swiss Gear to AI-enabled naval simulations with SAIC, and validating NBA draft picks for the Charlotte Hornets.</p>\n<p>Profitable for more than half a decade, Invisible reached $134M in revenue and ranked as the number two fastest growing AI company on the 2024 Inc. 5000. In September 2025, we raised $100M in growth capital to accelerate our mission of making AI actually work in the enterprise and to advance our platform technology.</p>\n<p><strong>About The Role</strong></p>\n<p>As a Senior Forward Deployed Engineer (FDE) for our U.S. Public Sector team at Invisible, you&#39;ll lead high-impact, AI-powered solutions that reshape how our clients operate their most critical workflows. You won’t just build and deploy — you’ll drive the strategy, architecture, and execution of end-to-end systems, working directly with client stakeholders and our internal delivery teams.</p>\n<p>This is a hybrid role: equal parts AI architect, hands-on engineer, and technical advisor. 
You’ll work on the front lines with ambitious clients, turning operational challenges into scalable AI workflows. You’ll be trusted to lead complex engagements, make architectural calls, and mentor others across technical and non-technical domains.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Scope, design, and lead implementation of AI-driven solutions in partnership with delivery teams and executive stakeholders</li>\n<li>Translate ambiguous workflows and business needs into repeatable systems and production-ready technical architectures</li>\n<li>Lead architecture design and trade-off discussions across performance, scalability, cost, and reliability</li>\n<li>Build usable systems from messy data and incomplete or evolving requirements</li>\n<li>Apply AI/ML solutions in highly regulated environments (e.g., defense, intelligence, healthcare, finance)</li>\n<li>Own projects end-to-end—from initial discovery and scoping through implementation, deployment, and post-launch iteration</li>\n<li>Build shared infrastructure, reusable components, and internal playbooks to improve delivery consistency and team velocity</li>\n<li>Mentor mid-level engineers and contribute to the development of forward-deployed AI engineering practices at Invisible</li>\n</ul>\n<p><strong>What We Need</strong></p>\n<ul>\n<li>Active U.S. Department of Defense Secret clearance or higher</li>\n<li>5+ years of software engineering experience, including work on data-intensive, ML, or backend systems</li>\n<li>Ability to work on-site 2–3 days per week at offices located in the greater Washington, D.C. 
and Reston, VA area</li>\n<li>Python &amp; ML/LLM frameworks: Hands-on experience with Python and modern ML/LLM tooling (e.g., Hugging Face, LangChain, OpenAI, Pinecone)</li>\n<li>Deployment &amp; infrastructure: Experience building and operating API-based services using Docker, FastAPI, Kubernetes, and major cloud platforms (AWS, GCP)</li>\n<li>Platform &amp; data management: Familiarity with workflow orchestration, pub/sub systems (e.g., Kafka), schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, and replay processes</li>\n<li>Experience leading requirements-gathering activities and translating stakeholder input into technical specifications</li>\n</ul>\n<p><strong>What’s In It For You</strong></p>\n<p>Invisible is committed to fair and competitive pay, ensuring that compensation reflects both market conditions and the value each team member brings. Our salary structure accounts for regional differences in cost of living while maintaining internal equity.</p>\n<p>For this position, the annual salary ranges by location are:</p>\n<p>Tier 2 Salary Range: $164,000 – $240,000 USD</p>\n<p>You can find more information about our geographic pay tiers here. During the interview process, your Invisible Talent Acquisition Partner will confirm which tier applies to your location. For candidates outside the U.S., compensation is adjusted to reflect local market conditions and cost of living.</p>\n<p>Bonuses and equity are included in offers above entry level. Final compensation is determined by a combination of factors, including location, job-related experience, skills, knowledge, internal pay equity, and overall market conditions. Because of this, every offer is unique. Additional details on total compensation and benefits will be discussed during the hiring process.</p>\n<p><strong>What It&#39;s Like to Work at Invisible:</strong></p>\n<p>At Invisible, we’re not just redefining work—we’re reinventing it. 
We operate at the intersection of advanced AI and human ingenuity, pushing the boundaries of what’s possible to unlock productivity and scale. Ownership is at the core of everything we do. Here, you won’t just execute tasks—you’ll build, innovate, and shape the future alongside world-class clients pushing the boundaries of AI.</p>\n<p>We expect bold ideas, relentless drive, and the ability to turn ambiguity into opportunity. The pace is fast, the challenges are big, and the growth is unmatched. We’re not for everyone, and we’re okay with that. If you’re looking for predictable routines, this isn’t the place for you. But if you’re driven to create, thrive in dynamic environments, and want a front-row seat to the AI revolution, you’ll fit right in.</p>\n<p><strong>Country Hiring Guidelines:</strong> Invisible is a hybrid organization with offices and team members located around the world. While some roles may offer remote flexibility, most positions involve in-office collaboration and are tied to specific locations. 
Any location-based requirements will be clearly outlined in the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6215398a-2c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Invisible Technologies","sameAs":"https://www.invisible.co/join-us/","logo":"https://logos.yubhub.co/invisible.co.png"},"x-apply-url":"https://job-boards.eu.greenhouse.io/invisibletech/jobs/4741723101","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$164,000 – $240,000USD","x-skills-required":["Python","ML/LLM frameworks","Docker","FastAPI","Kubernetes","AWS","GCP","workflow orchestration","pub/sub systems","schema governance","data contracts","Unity Catalog","Delta/ETL pipelines","replay processes"],"x-skills-preferred":["Hugging Face","LangChain","OpenAI","Pinecone"],"datePosted":"2026-03-06T12:12:41.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington DC–Baltimore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML/LLM frameworks, Docker, FastAPI, Kubernetes, AWS, GCP, workflow orchestration, pub/sub systems, schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, replay processes, Hugging Face, LangChain, OpenAI, Pinecone","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":164000,"maxValue":240000,"unitText":"YEAR"}}}]}