<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
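<p>As a hedged illustration of what &#39;governance-as-code&#39; can mean in practice, the sketch below applies a version-controlled policy-tag registry to a table schema. All table, column, and tag names are hypothetical; a real control plane would push the result to the warehouse (for example, BigQuery policy tags) rather than return it in memory.</p>

```python
# Minimal governance-as-code sketch: a declarative, version-controlled
# policy registry is applied to table schemas by a control plane,
# replacing manual console tagging. All names here are hypothetical.

PII_POLICY = {
    "email": "pii/contact",
    "ssn": "pii/national_id",
}

def apply_policy_tags(schema: dict) -> dict:
    """Return a copy of a table schema with policy tags attached to
    any column whose name appears in the governed-column registry."""
    tagged = {}
    for column, meta in schema.items():
        meta = dict(meta)
        if column in PII_POLICY:
            meta["policy_tag"] = PII_POLICY[column]
        tagged[column] = meta
    return tagged

schema = {
    "email": {"type": "STRING"},
    "signup_date": {"type": "DATE"},
}
print(apply_policy_tags(schema))
```

<p>Because the registry lives in code, every tagging decision is reviewable and reproducible, which is the point of an &#39;active&#39; control plane.</p>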
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
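<p>A data contract, as referenced above, is essentially a machine-checkable agreement about a dataset&#39;s shape, and quality gating means a pipeline fails fast when the contract is violated. A minimal sketch, with a hypothetical contract and sample rows:</p>

```python
# Minimal data-contract quality gate: rows are checked against a
# declared contract before a pipeline promotes them downstream.
# The contract fields and sample rows are hypothetical illustrations.

CONTRACT = {
    "account_id": str,
    "balance": float,
}

def violations(rows: list[dict]) -> list[str]:
    """Return a description of every contract violation found."""
    problems = []
    for i, row in enumerate(rows):
        for field, expected in CONTRACT.items():
            if field not in row:
                problems.append(f"row {i}: missing {field!r}")
            elif not isinstance(row[field], expected):
                problems.append(f"row {i}: {field!r} is not {expected.__name__}")
    return problems

rows = [
    {"account_id": "a-1", "balance": 10.5},
    {"account_id": "a-2"},  # missing balance -> this batch gets gated
]
issues = violations(rows)
assert issues == ["row 1: missing 'balance'"]
```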
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while the platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
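<p>For readers less familiar with the orchestration tools named above, their common core is a dependency graph of tasks executed in topological order. A tool-agnostic sketch with hypothetical task names (Airflow, Dagster, and Prefect add scheduling, retries, and observability on top of this core, and this toy version does no cycle detection):</p>

```python
# Minimal orchestration sketch: run tasks in dependency (topological)
# order. Task names are hypothetical; real orchestrators layer
# scheduling, retries, and observability on top of this idea.

deps = {                       # task -> upstream tasks it depends on
    "ingest": [],
    "transform": ["ingest"],
    "publish": ["transform", "ingest"],
}

def topo_order(deps: dict) -> list[str]:
    """Return tasks ordered so each runs after all of its upstreams."""
    order, done = [], set()

    def visit(task):
        for upstream in deps[task]:
            if upstream not in done:
                visit(upstream)
        if task not in done:
            done.add(task)
            order.append(task)

    for task in deps:
        visit(task)
    return order

print(topo_order(deps))  # ingest before transform before publish
```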
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Atlan, Collibra, DataHub, Great Expectations, Monte Carlo, Soda, Terraform, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0b1fb5b7-d63</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Data Platform Engineer to join our team.</p>
<p><strong>Key responsibilities include:</strong></p>
<ul>
<li>Building for Scale: You will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</li>
<li>Mastering the Orchestration: You&#39;ll dive deep into Kubernetes, leveraging Operators and Helm to automate complex data workflows and platform management, building out Kubernetes-native data and AI architecture.</li>
<li>Bridging the Clouds: You will improve our existing tooling and implement new, seamless integrations between our AWS and GCP environments.</li>
<li>Defining our State: You&#39;ll use Terraform to manage and define our entire data infrastructure through code, ensuring reproducibility and transparency across the stack.</li>
</ul>
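<p>The Operator pattern mentioned above boils down to a reconcile loop: compare desired state (the spec) with observed state and act to close the gap. A minimal sketch, with hypothetical resource names and an in-memory dict standing in for the Kubernetes API server (a real operator would use client-go, kopf, or similar):</p>

```python
# Sketch of the Operator reconcile loop: declarative desired state is
# compared with observed state and the difference becomes actions.
# The in-memory dicts stand in for the Kubernetes API server.

desired = {"warehouse-sync": {"replicas": 3}}
observed = {"warehouse-sync": {"replicas": 1}}

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to drive observed state to desired."""
    actions = []
    for name, spec in desired.items():
        current = observed.get(name, {}).get("replicas", 0)
        if current < spec["replicas"]:
            actions.append(f"scale {name} up to {spec['replicas']}")
        elif current > spec["replicas"]:
            actions.append(f"scale {name} down to {spec['replicas']}")
    return actions

print(reconcile(desired, observed))  # one scale-up action
```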
<p><strong>Requirements:</strong></p>
<ul>
<li>K8s Expertise: You have a solid understanding and practical experience with Kubernetes, specifically working with Operators and Helm to manage complex application lifecycles.</li>
<li>The Engineer&#39;s Mindset: You are proficient in Python or Java and enjoy writing clean, efficient code to solve infrastructure challenges.</li>
<li>Cloud Native: You are comfortable working in at least one of the major cloud providers (AWS or GCP) and understand how to get the best out of their managed services.</li>
<li>Optimising and Refining: You enjoy refining existing data infrastructure and deploying greenfield Kubernetes-native OSS projects.</li>
</ul>
<p><strong>Bonus points if you have:</strong></p>
<ul>
<li>Experience with SQL-based transformation workflows, specifically using dbt within BigQuery.</li>
<li>Familiarity with streaming and ingestion tech like Kafka or Debezium.</li>
<li>A background in Linux administration or data management best practices.</li>
</ul>
<p><strong>Interview process:</strong></p>
<p>Interviewing is a two-way process and we want you to have the time and opportunity to get to know us, as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious. In general, following a chat with one of our Talent Team, you can expect:</p>
<ul>
<li>Stage 1 - 30 minutes with one of the team</li>
<li>Stage 2 - Take-home challenge</li>
<li>Stage 3 - 60-minute technical interview with two team members</li>
<li>Stage 4 - 45-minute final with two data executives</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>25 days holiday (plus take your public holiday allowance whenever works best for you)</li>
<li>An extra day&#39;s holiday for your birthday</li>
<li>Annual leave that increases with length of service, plus the option to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance at 4x your salary &amp; group income protection</li>
<li>Private Medical Insurance with VitalityHealth, including mental health support and cancer care</li>
<li>Partner benefits including discounts with Waitrose, Mr&amp;Mrs Smith and Peloton</li>
<li>Generous family-friendly policies</li>
<li>Perkbox membership giving access to retail discounts, a wellness platform for physical and mental health, and weekly free and boosted perks</li>
<li>Access to initiatives like Cycle to Work, salary-sacrificed gym partnerships and Electric Vehicle (EV) leasing</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, Java, Terraform, AWS, GCP, SQL, dbt, BigQuery, Kafka, Debezium, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank operating in the UK, employing over 3,000 people across multiple locations.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/1EA5EDDAD9</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>42c9cfa4-8e3</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p><strong>Data Platform Engineer</strong></p>
<p>Join AVL and make a direct impact on shaping the future of Data, AI, and Mobility.</p>
<p><strong>Your Responsibilities:</strong></p>
<ul>
<li>Review and stabilise existing platform implementations (Databricks, Foundry – pipelines, Ontology schemas, Workshop applications, Functions, notebooks).</li>
<li>Identify performance bottlenecks, technical debt, and governance gaps across data pipelines and application layers.</li>
<li>Lead Ontology governance and design reviews, acting as a gatekeeper for all schema changes (Object Types, Links, Properties, Actions).</li>
<li>Define and document target data architectures (ingestion, transformation, and consumption layers).</li>
<li>Establish coding standards, naming conventions, repository structures, and Function versioning policies.</li>
<li>Enforce code reviews and technical validation before production deployment through Foundry Branching and Proposal workflows.</li>
<li>Define and implement a structured testing strategy (unit tests for Functions, integration tests, data quality checks, pipeline expectations).</li>
<li>Design and improve CI/CD pipelines and Dev/Test/Prod promotion processes using Foundry Marketplace/DevOps.</li>
<li>Automate deployments, rollbacks, and environment configurations.</li>
<li>Create and maintain architecture documentation (ADRs, data lineage diagrams, Ontology schemas, data flow diagrams).</li>
<li>Design reusable Workshop component libraries, custom widgets, and Slate application patterns.</li>
<li>Design and validate new platform solutions aligned with strategy, security, and governance requirements.</li>
<li>Mentor the development team on architectural thinking and platform best practices (40% hands-on coding, 60% architecture/leadership).</li>
</ul>
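<p>The testing strategy above (unit tests for Functions, data quality checks, pipeline expectations) can be sketched with a plain transformation plus assertions on its output. Function and column names are hypothetical, and in Foundry these checks would be expressed through its own expectations tooling rather than bare asserts:</p>

```python
# Sketch of a unit-testable pipeline Function plus data-quality
# expectations. Column names and rules are hypothetical illustrations.

def normalise_speed(rows: list[dict]) -> list[dict]:
    """Convert raw speed readings from km/h to m/s, dropping bad rows."""
    out = []
    for row in rows:
        if row.get("speed_kmh") is None or row["speed_kmh"] < 0:
            continue  # expectation: no null or negative readings downstream
        out.append({**row, "speed_ms": row["speed_kmh"] / 3.6})
    return out

raw = [{"speed_kmh": 36.0}, {"speed_kmh": -5.0}, {"speed_kmh": None}]
clean = normalise_speed(raw)

# Expectations, expressed here as unit-test assertions:
assert all(r["speed_ms"] >= 0 for r in clean)
assert len(clean) == 1
```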
<p><strong>Your Profile:</strong></p>
<ul>
<li>Master’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>5+ years of experience in data engineering or platform architecture roles.</li>
<li>Strong expertise in modern data platforms (Databricks, Snowflake, AWS Glue, Azure Synapse, or similar). Foundry experience is strongly preferred but not required.</li>
<li>Advanced skills in Python (PySpark), SQL (Spark SQL), and TypeScript for backend logic and application development.</li>
<li>Experience with distributed data processing (Spark architecture, partitioning strategies, performance optimisation).</li>
<li>Strong understanding of relational databases (PostgreSQL, Oracle, or similar).</li>
<li>Experience with CI/CD workflows, Git branching strategies, and automated testing in data environments.</li>
<li>Solid experience in end-to-end ETL and data transformation processes.</li>
<li>Proven experience in performance optimisation and scalable architecture design.</li>
<li>Experience in defining development standards, interface contracts, and engineering best practices.</li>
<li>Hands-on coding mindset: must write production code daily, not only review or document.</li>
<li>Structured, analytical, and documentation-oriented approach.</li>
<li>Strong communication and technical leadership skills, with very good proficiency in English and French.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>
<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>
<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>
<li>A team where your input and technical decisions truly matter.</li>
<li>A competitive package and benefits.</li>
</ul>
<p><strong>How to Apply:</strong></p>
<p>If you have these qualifications and are looking for a new challenge, we encourage you to apply to discuss it further!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Foundry, Python, SQL, TypeScript, Spark, PostgreSQL, CI/CD, Git, ETL, performance optimisation, scalable architecture design, cloud-native ML, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>AVL</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL is a leading mobility technology company that provides concepts, solutions, and methodologies in fields like vehicle development and integration, e-mobility, automated and connected mobility, and software.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Sala-Al-Jadida-Data-Platform-Engineer/1365823133/</Applyto>
      <Location>Sala Al Jadida, MA</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>