<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>0a3dc5a7-8d9</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>
<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>
<p>You will partner closely with Finance, People, and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume, whether through dashboards, self-service exploration, or AI-powered workflows.</p>
<p>Responsibilities:</p>
<p>Data Modeling &amp; Semantics</p>
<ul>
<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>
<li>Define and standardize core Finance, HR, and Enterprise-level metrics (e.g., revenue, ARR, billing, attrition, executive insights, security) with clear, governed logic</li>
<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>
<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>
</ul>
<p>AI-Ready Data &amp; Snowflake Ecosystem</p>
<ul>
<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>
<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>
<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>
</ul>
<p>Data Quality, Governance &amp; Trust</p>
<ul>
<li>Implement robust testing, validation, and documentation practices in dbt</li>
<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>
<li>Apply data governance best practices, including access controls, lineage, and auditability</li>
<li>Partner across teams to establish clear ownership and accountability for data assets</li>
</ul>
<p>Collaboration &amp; Delivery</p>
<ul>
<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>
<li>Support self-service analytics by building intuitive, reusable datasets</li>
<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>
<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>
</ul>
<p>AI and Data Mindset</p>
<ul>
<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>
<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>
<li>Be comfortable leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>
<li>Stay curious about emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>
<li>Strong SQL skills and experience building analytics-ready data models</li>
<li>Mentorship &amp; Engineering Excellence: Mentorship, raising the technical bar, establishing organization-wide standards for dbt/SQL quality and CI/CD</li>
<li>Hands-on experience with dbt and Snowflake, or with other ETL, modeling, and database platforms</li>
<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>
<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>
<li>Experience translating business requirements into clear, maintainable data logic</li>
<li>Familiarity with SaaS metrics and Finance and People data (e.g., ARR, revenue recognition, billing, attrition)</li>
<li>Experience with data quality, testing, and documentation best practices</li>
<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>
<li>Experience with BI tools such as Tableau or Looker</li>
<li>Strong communication skills and ability to work across technical and business teams</li>
</ul>
<p>What you can look forward to as an Okta employee!</p>
<ul>
<li>Amazing Benefits</li>
<li>Making Social Impact</li>
<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>
<li>Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best suits their needs. Each organization is unique in the degree of flexibility and mobility with which it works, so that all employees are enabled to be their most creative and successful selves, regardless of where they live.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7818510</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3048ccd4-7de</externalid>
      <Title>Data Analyst</Title>
      <Description><![CDATA[<p>We are seeking a Data Analyst to join our growing data team. As a Data Analyst at LayerZero, you will be at the forefront of shaping a rich data foundation for a company making a real impact in the web3 space. You will work closely with teams and leaders to uncover insights, drive decision-making, and fuel our next-generation products and services.</p>
<p>The successful candidate will dive headfirst into the world of crypto data, exploring on-chain wallets and contracts, block and transaction data, insights from in-house systems, and third-party intelligence. Your mission will be to combine these diverse datasets into rich, actionable data products for a broad group of stakeholders.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leveraging and expanding our ever-growing Kimball dimensional model</li>
<li>Writing SQL to create and expand insights in our in-house reporting solutions</li>
<li>Collaborating with stakeholders across the organization to conduct ad-hoc explorations and analytics</li>
<li>Being a key owner of data quality, building out insights that serve the data team itself</li>
<li>Composing pipelines by writing SQL code to clean, combine, refine, and aggregate data into the insights the organization needs</li>
<li>Collaborating on new datasets to ingest into our Snowflake data warehouse, working closely with data engineers on your team</li>
<li>Confidently shipping code that supports tens of billions of dollars in daily transaction volume</li>
</ul>
<p>We are looking for someone with previous data analyst experience, likely with a bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Physics, or a related field, but we also consider and highly value equivalent practical experience.</p>
<p>Required skills:</p>
<ul>
<li>Strong SQL knowledge and experience</li>
<li>A proven track record in data modeling, statistics, and analytics</li>
<li>Experience working with a broad range of stakeholders</li>
<li>Strong convictions, weakly held</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with general programming</li>
<li>Experience with Snowflake</li>
<li>Experience building DAG-based data pipelines</li>
<li>Experience with streaming real-time data pipelines</li>
<li>Previous experience with blockchain technologies, smart contracts, and decentralized finance</li>
<li>Experience with Kimball dimensional modeling</li>
<li>Experience working on mid-to-large-scale data stacks</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, statistics, analytics, Snowflake, Kimball dimensional modeling, general programming, DAG-based data pipelines, streaming real-time data pipelines, blockchain technologies, smart contracts, decentralized finance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>LayerZero</Employername>
      <Employerlogo>https://logos.yubhub.co/layerzero.com.png</Employerlogo>
      <Employerdescription>LayerZero is a company founded in 2021, creating a community of cross-chain developers.</Employerdescription>
      <Employerwebsite>https://layerzero.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/layerzerolabs/jobs/5787956004</Applyto>
      <Location>Vancouver, BC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
  </jobs>
</source>