<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>46f577e7-522</externalid>
      <Title>Staff Data Analyst</Title>
      <Description><![CDATA[<p>Honor Technology&#39;s mission is to change the way society cares for older adults. As a leader in aging care innovation, Honor provides the technology, tools, and services that empower older adults to live life on their own terms.</p>
<p>We&#39;re looking for a Staff Data Analyst to join our team. This role reports to the VP of Data and joins a team of five other analysts collaborating closely with stakeholders across the entire organization. We&#39;re looking for someone who is excited to jump into new problems and make an impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Solve problems that have real-world impact</li>
<li>Thrive in diverse, cross-functional environments, collaborating with partners across design, product, engineering, and operations</li>
<li>Live at the intersection of software and the real world, whether that&#39;s optimizing complex operational problems or tracing the lineage of a key metric through a dozen transformations</li>
<li>Share knowledge, mentor others, and contribute to a healthy, inclusive team culture</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of professional analytics experience with a track record of owning analytics systems and solving applied business problems</li>
<li>Strong stakeholder management: you are comfortable translating business needs into concrete requirements and communicating tradeoffs clearly</li>
<li>Excellent written and verbal communication skills</li>
<li>Deep experience with our analytics stack (Git, Fivetran, Redshift, dbt, Looker) or equivalent tools (and a desire to learn new ones!)</li>
<li>A passion for using your data intuition to navigate a sea of messy data, generate hypotheses, and implement solutions that directly impact the business</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Base pay is just a part of our total rewards program</li>
<li>Honor offers generous equity packages that increase with position level and responsibilities</li>
<li>A 401K with up to a 4% employer match</li>
<li>Medical, dental, and vision coverage including zero-cost plans for employees</li>
<li>Short-term disability, long-term disability, and life insurance are fully employer-paid with a voluntary additional life insurance option</li>
<li>A generous time-off program, mental health benefits, wellness program, and discount program</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$148,500-$165,000 USD</Salaryrange>
      <Skills>Git, Fivetran, Redshift, dbt, Looker, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults to live life on their own terms. It has a global franchise network and over 100,000 Care Pros.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>148500</Compensationmin>
      <Compensationmax>165000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8451598002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p><strong>Your 12-Month Journey</strong></p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p><strong>What You’ll Be Doing</strong></p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p><strong>What You Bring</strong></p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments</li>
<li>The modern data stack: familiarity with dbt and Airbyte/Fivetran, and an understanding of how these tools fit into a broader ecosystem</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform)</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools; you know how to design DAGs that are resilient and easy to debug</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments)</li>
<li>Programming: expert-level Python and advanced SQL; you are comfortable writing clean, testable, and modular code</li>
<li>Comfortable in a fast-paced environment</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and handling your own project scoping and backlog management</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€70,000–€90,000 per year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>90000</Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country>Netherlands</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e503559e-cf7</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>
<p><strong>Job Description:</strong></p>
<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>
<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>
<p><strong>About Us:</strong></p>
<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>
<p><strong>Job Responsibilities:</strong></p>
<p>As part of this role, you will:</p>
<ul>
<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>
<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human in the loop, escalations, and feeding human decisions back into the service</li>
<li>Implement testing, observability, alerting, and disaster recovery for all services</li>
<li>Implement tracing, performance, and regression testing</li>
<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<p>The ideal candidate for the role has:</p>
<ul>
<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>
<li>Expertise with:
<ul>
<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>
<li>SQL, dbt, Python</li>
<li>OLAP / OLTP data modelling and architecture</li>
<li>Key-value stores: Redis, DynamoDB, or equivalent</li>
<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>
<li>API frameworks: FastAPI, Flask, etc.</li>
<li>Production ML service experience</li>
<li>Working across a full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>
</ul>
</li>
</ul>
<p><strong>Total Rewards Package:</strong></p>
<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p><strong>Salary Range:</strong></p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $200,700 - $250,900</li>
<li>Canadian employees (any location): CAD 189,700 - 237,100</li>
</ul>
<p><strong>Diversity &amp; Belonging:</strong></p>
<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)</Salaryrange>
      <Skills>Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, DynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5639559004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>fb622500-15e</externalid>
      <Title>Data Scientist, Marketing</Title>
      <Description><![CDATA[<p>You will directly impact Replit&#39;s growth by turning user behavior into actionable insights that optimize our marketing efforts, improve conversion funnels, and drive sustainable revenue growth across our self-serve and enterprise segments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and analyse marketing experiments to optimise campaigns, messaging, and channel performance across email, paid ads, social, and content marketing.</li>
<li>Build attribution models and multi-touch conversion funnels to understand the customer journey from first touch to paid conversion.</li>
<li>Develop predictive models to identify high-intent prospects, optimise lead scoring, and improve targeting for paid acquisition campaigns.</li>
<li>Partner with marketing, growth, and revenue teams to translate business questions into rigorous analysis and clear recommendations.</li>
<li>Create self-service dashboards and automated reporting that surface key marketing metrics (CAC, LTV, ROAS, conversion rates) for go-to-market teams.</li>
<li>Build and maintain data pipelines that integrate marketing platforms (Google Ads, Meta, Iterable, Segment, etc.) with our product analytics.</li>
</ul>
<p><strong>Examples of what you could do</strong></p>
<ul>
<li>Build propensity models to identify which free users are most likely to convert to plans based on usage patterns and engagement signals.</li>
<li>Analyse cohort behaviour and retention patterns to optimise lifecycle marketing campaigns and reduce churn.</li>
<li>Develop segmentation models to personalise messaging and targeting for different user personas (students, hobbyists, professional developers, enterprise teams).</li>
<li>Build real-time alerting systems to flag anomalies in campaign performance or conversion metrics, and automate bidding adjustments across platforms.</li>
</ul>
<p><strong>Required skills and experience</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Economics, or related field, OR equivalent real-world experience in data roles.</li>
<li>4+ years of experience in data science or related roles with a focus on marketing, growth, or business analytics.</li>
<li>Strong SQL skills and experience working with large datasets, particularly event-level user behaviour data, and designing ETL workflows using dbt.</li>
<li>Proficiency in Python and data science libraries (pandas, scikit-learn, statsmodels, etc.).</li>
<li>Experience designing and analysing A/B tests and experiments, including statistical rigor around sample sizing, significance testing, and causal inference.</li>
<li>Experience building dashboards and visualisations (Looker, Tableau, Mode, or similar tools).</li>
<li>Ability to translate ambiguous business questions into structured analysis and communicate findings clearly to non-technical stakeholders.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.).</li>
<li>Background in growth analytics, marketing analytics, or conversion rate optimisation at a SaaS or PLG company.</li>
<li>Familiarity with marketing technology platforms (Google Analytics, Segment, Iterable, Marketo, HubSpot, etc.).</li>
<li>Experience with attribution modelling, marketing mix modelling, or incrementality testing.</li>
<li>Understanding of PLG (product-led growth) motions and self-serve conversion funnels.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience analysing freemium or usage-based pricing models.</li>
<li>Understanding of developer tools, collaborative coding environments, or technical products.</li>
<li>Experience with causal inference methods (difference-in-differences, synthetic control, propensity score matching).</li>
<li>Familiarity with customer data platforms (CDPs) and event tracking implementation.</li>
<li>Experience working with sales and customer success data to analyse expansion revenue and upsell opportunities.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $250K</Salaryrange>
      <Skills>SQL, Python, data science libraries (pandas, scikit-learn, statsmodels, etc.), ETL workflows using dbt, A/B tests and experiments, dashboard and visualisation tools (Looker, Tableau, Mode, etc.), modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.), growth analytics, marketing analytics, or conversion rate optimisation, marketing technology platforms (Google Analytics, Segment, Iterable, etc.), attribution modelling, marketing mix modelling, or incrementality testing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/c05749db-f413-4091-a95c-c8e0aa1b5630</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
  </jobs>
</source>