<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1f2f48ad-46d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>
<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and eagerness to learn. You should exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs. You should have excellent communication skills and tailor explanations of technical concepts to a variety of audiences.</p>
<p>Nice to have: exposure to Apache Airflow or other DAG frameworks, experience with Tableau, Looker, or a similar visualization/business-intelligence platform, experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe, and familiarity with Python.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI customer service company that helps businesses deliver better customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a3dc5a7-8d9</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>
<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>
<p>You will partner closely with Finance, People and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume, whether through dashboards, self-service exploration, or AI-powered workflows.</p>
<p>Responsibilities:</p>
<p>Data Modeling &amp; Semantics</p>
<ul>
<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>
<li>Define and standardize core Finance, HR, and Enterprise-level metrics (e.g., revenue, ARR, billing, attrition, executive insights, security) with clear, governed logic</li>
<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>
<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>
</ul>
<p>AI-Ready Data &amp; Snowflake Ecosystem</p>
<ul>
<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>
<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>
<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>
</ul>
<p>Data Quality, Governance &amp; Trust</p>
<ul>
<li>Implement robust testing, validation, and documentation practices in dbt</li>
<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>
<li>Apply data governance best practices, including access controls, lineage, and auditability</li>
<li>Partner across teams to establish clear ownership and accountability for data assets</li>
</ul>
<p>Collaboration &amp; Delivery</p>
<ul>
<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>
<li>Support self-service analytics by building intuitive, reusable datasets</li>
<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>
<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>
</ul>
<p>AI and Data Mindset</p>
<ul>
<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>
<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>
<li>Be comfortable leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>
<li>Show curiosity about emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>
<li>Strong SQL skills and experience building analytics-ready data models</li>
<li>Mentorship and engineering excellence: raising the technical bar and establishing organization-wide standards for dbt/SQL quality and CI/CD</li>
<li>Hands-on experience with dbt and Snowflake, or other ETL, modeling, and database platforms</li>
<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>
<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>
<li>Experience translating business requirements into clear, maintainable data logic</li>
<li>Familiarity with SaaS metrics and Finance and People data (e.g., ARR, revenue recognition, billing, attrition)</li>
<li>Experience with data quality, testing, and documentation best practices</li>
<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>
<li>Experience with BI tools such as Tableau or Looker</li>
<li>Strong communication skills and ability to work across technical and business teams</li>
</ul>
<p>What you can look forward to as an Okta employee!</p>
<ul>
<li>Amazing Benefits</li>
<li>Making Social Impact</li>
<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>
<li>Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best suits their needs. Each organization has its own degree of flexibility and mobility, so all employees can be their most creative and successful selves, regardless of where they live.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7818510</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4cc3bc0-a5d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Analytics Engineer</strong></p>
<p>You&#39;ll be part of a team that empowers you to do the best work of your life. As a Senior Analytics Engineer at ZoomInfo, you&#39;ll be responsible for building deep expertise in our company data pipeline architecture.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
</ul>
<p><strong>What You&#39;ll Do:</strong></p>
<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>
<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure: you understand how data moves through systems and what can go wrong</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills: able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality: you drive your work forward and know when to seek input</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>
<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>
<li>Background contributing to</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data pipeline architecture, data transformation, ETL systems, data processing infrastructure, Python, SQL, data analysis, data manipulation, ambiguous data problems, data quality initiatives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides sales intelligence and go-to-market solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408633002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>438dbd4f-1a6</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>Senior Analytics Engineer</strong></p>
<p><strong>About the role</strong></p>
<p>As an Analytics Engineer, you&#39;ll be a key early member of our data function, responsible for building and evolving the analytics foundations that power product decision-making across the company. You&#39;ll work closely with Product, Analytics, and Engineering to turn raw product data into trusted, well-defined datasets, metrics, and data products that scale with the business.</p>
<p><strong>What you&#39;ll be doing</strong></p>
<ul>
<li>Partner with Product, Analytics, and Engineering to understand data needs and translate ambiguous questions into clear, scalable data models</li>
<li>Define, build, and maintain core dbt models that transform raw product data into canonical, well-documented datasets</li>
<li>Own metric definitions and transformation logic to ensure consistency, accuracy, and trust across reporting and analysis</li>
<li>Establish and uphold data quality standards, testing, and expectations around freshness and reliability</li>
<li>Work closely with Product Analysts to enable faster, higher-quality insights and decision-making</li>
<li>Support data consumption in tools like Amplitude and Omni, ensuring data is intuitive and easy to self-serve</li>
<li>Act as a subject-matter expert for analytics engineering, guiding best practices and helping others solve data problems</li>
<li>Contribute to shaping the future direction of our data stack as product complexity and scale increase</li>
</ul>
<p><strong>About the setup</strong></p>
<ul>
<li>⚒️ Stack: dbt, Snowflake, Amplitude, Omni</li>
<li>🌱 Early, high-impact role with real ownership over the analytics layer</li>
<li>🤝 Highly collaborative environment with product- and data-savvy stakeholders</li>
<li>🚀 Outcome-focused team where pragmatism and impact matter more than process</li>
</ul>
<p><strong>We&#39;d love to hear from you if</strong></p>
<ul>
<li>You have 6+ years of experience in analytics engineering or data engineering, ideally in product-led or high-growth environments</li>
<li>You have strong hands-on experience with dbt and enjoy designing modular, scalable, and well-tested data models</li>
<li>You write advanced, performant, and maintainable SQL</li>
<li>You can translate business and product requirements into robust data pipelines and metrics</li>
<li>You have a strong product mindset and understand how data and metrics influence product direction</li>
<li>You&#39;re comfortable operating across the stack and taking ownership end to end when needed</li>
<li>You care deeply about data quality, clarity, and trust</li>
<li>You&#39;re outcome-driven and can clearly articulate the impact your work has had on teams or the business</li>
</ul>
<p><strong>Our culture</strong></p>
<p>At Synthesia we&#39;re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions.</p>
<p><strong>The good stuff...</strong></p>
<ul>
<li>A hybrid or remote-friendly environment for candidates based in Europe. You can work fully remotely if you&#39;re not local to an office, or hybrid from our London, Amsterdam, Munich, Zurich or Copenhagen offices.</li>
<li>A competitive salary + stock options</li>
<li>25 days of annual leave + public holidays (plus the option to take 5 days unpaid leave and carry 5 days over)</li>
<li>You will join an established company culture with optional regular socials and company retreats</li>
<li>Paid parental leave entitling primary caregivers to 16 weeks of full pay and secondary caregivers to 5 weeks of full pay</li>
<li>You can participate in a generous recruitment referral scheme if you help us to hire</li>
<li>The equipment you need to be successful in your role</li>
</ul>
<p>You can see more about who we are and how we work here: <a href="https://www.synthesia.io/careers">https://www.synthesia.io/careers</a></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>A competitive salary + stock options</Salaryrange>
      <Skills>dbt, Snowflake, Amplitude, Omni, SQL, data engineering, product-led environments, modular data models, scalable data models, well-tested data models, data quality, data clarity, data trust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.io.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. The company is headquartered in London, with offices and teams across Europe and the US.</Employerdescription>
      <Employerwebsite>https://www.synthesia.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/c11f83bc-46db-4c7b-a2ea-38f2ace507ba</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>