{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/fivetran"},"x-facet":{"type":"skill","slug":"fivetran","display":"Fivetran","count":9},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21f5f6c3-734"},"title":"Data Engineer","description":"<p>About the Role We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>\n<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>\n<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>\n<p>Your 12-Month Journey During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>\n<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>\n<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>\n<p>What You’ll Be Doing Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>\n<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. 
You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>\n<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>\n<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>\n<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>\n<li>The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</li>\n<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>\n<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</li>\n<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>\n<li>Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</li>\n<li>Comfortable in a fast-paced environment.</li>\n<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of own project scoping and backlog management.</li>\n<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>\n<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>\n<li>Work in a diverse and multicultural team</li>\n<li>€1,500 annual training budget plus internal training</li>\n<li>Pension plan, travel reimbursement, and wellness perks</li>\n<li>28 paid holiday days + 2 additional days to relax in 2026</li>\n<li>Work from anywhere for 4 weeks/year</li>\n<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>\n<li>Apple MacBook and tools</li>\n<li>€200 Home Office budget</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21f5f6c3-734","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Tellent","sameAs":"https://careers.tellent.com","logo":"https://logos.yubhub.co/careers.tellent.com.png"},"x-apply-url":"https://careers.tellent.com/o/data-engineer","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"EUR 70000–90000 / year","x-skills-required":["Data Engineering","Cloud environments","dbt","Airbyte/Fivetran","BigQuery","GCP ecosystem","Infrastructure-as-Code","Terraform","Airflow","Dagster","Python","SQL","CI/CD best practices","DevOps practices"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:06.548Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices","baseSalary":{"@type":"MonetaryAmount","currency":"EUR","value":{"@type":"QuantitativeValue","minValue":70000,"maxValue":90000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8b447835-74a"},"title":"Senior DataOps Engineer - Revenue Management (all genders)","description":"<p><strong>Your future team</strong></p>\n<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>\n<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>\n<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>\n<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>\n<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>\n<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a Data Ops Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. 
You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>\n<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment.</li>\n<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8b447835-74a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597559","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Terraform","Cloud platforms (AWS preferred)","ML model deployment (MLflow, SageMaker, or similar)"],"x-skills-preferred":["AI tools like Claude, Copilot, and Codex","Data Storage & Querying (S3, Redshift, Athena, DuckDB)","ML & Model 
Serving (MLflow, SageMaker, deployment APIs)","Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS)","Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools)","Ingestion (Kafka-based event systems, Airbyte, Fivetran)"],"datePosted":"2026-04-18T22:09:42.352Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage & Querying (S3, Redshift, Athena, DuckDB), ML & Model Serving (MLflow, SageMaker, deployment APIs), Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1aad838f-387"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>\n<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>\n<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>\n<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>\n</ul>\n<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p>To be successful in this role, you&#39;ll need:</p>\n<ul>\n<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>\n<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>\n<li>Deep experience with at least one of:</li>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>\n<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>\n<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>\n<li>Experience working in fintech, financial services, or highly regulated environments.</li>\n<li>Security engineering 
background with focus on data protection and access controls.</li>\n</ul>\n<p>Technologies We Use:</p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>\n<li>Storage: GCS, S3.</li>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>\n<li>Languages: Python, Go, SQL.</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1aad838f-387","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures"],"x-skills-preferred":["data warehousing","ETL/ELT pipelines","analytics infrastructure","data reliability","availability","cost efficiency","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"datePosted":"2026-04-18T15:52:47.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f86b2355-24d"},"title":"Incentive Compensation System Engineer","description":"<p>As an Incentive Compensation Systems Engineer, you will lead the technical implementation, configuration, and optimization of our incentive compensation management (ICM) platform. 
You will work closely with Revenue Operations, Data Engineering, Finance, Accounting, and HR to translate incentive compensation designs into scalable system configurations, ensuring accurate calculations, reliable integrations, and efficient workflows.</p>\n<p><strong>Responsibilities:</strong></p>\n<p><strong>ICM Platform Implementation &amp; Configuration</strong></p>\n<ul>\n<li>Lead the end-to-end implementation and configuration of our ICM platform, including plan modeling, crediting rules, quota management, and payout calculations</li>\n<li>Translate compensation plan documents into system logic, ensuring accurate and auditable calculation outputs</li>\n<li>Design and maintain system workflows for plan changes, exceptions, and adjustments</li>\n<li>Develop and execute testing strategies to validate system accuracy before each compensation cycle</li>\n<li>Provide technical guidance during plan design discussions, advising on system capabilities and constraints</li>\n</ul>\n<p><strong>Data Integration &amp; Architecture</strong></p>\n<ul>\n<li>Build and maintain integrations between the ICM platform and source systems (CRM, ERP, HRIS, billing platforms)</li>\n<li>Ensure data integrity across the compensation data pipeline, from opportunity data through final payout</li>\n<li>Partner with Data Engineering and Revenue Operations to define data requirements and resolve data quality issues</li>\n<li>Document data flows, transformation logic, and system dependencies</li>\n</ul>\n<p><strong>System Administration &amp; Optimization</strong></p>\n<ul>\n<li>Serve as the primary system administrator for the ICM platform, managing user access, security, and system health</li>\n<li>Identify opportunities to streamline and automate compensation processes, reducing manual intervention and cycle time</li>\n<li>Monitor system performance and troubleshoot calculation errors or integration failures</li>\n<li>Evaluate and recommend new technologies, including Claude-powered solutions, to enhance system capabilities</li>\n</ul>\n<p><strong>Reporting &amp; Analytics</strong></p>\n<ul>\n<li>Build and maintain reporting infrastructure within the ICM platform to support compensation dashboards and attainment tracking</li>\n<li>Partner with Revenue Operations and Accounting to deliver accurate, timely compensation data for forecasting and accruals</li>\n<li>Enable self-service reporting for compensation administrators and business partners</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>5+ years of experience implementing or administering ICM platforms (e.g., Xactly, Varicent, Anaplan)</li>\n<li>Strong technical skills with experience in SQL, data integration tools, and system configuration</li>\n<li>Proven ability to translate business requirements into system logic and maintain accurate, auditable calculations</li>\n<li>Experience managing data pipelines and ensuring data quality across integrated systems</li>\n<li>Strong problem-solving skills with a systematic approach to debugging and root cause analysis</li>\n<li>Excellent documentation practices and ability to communicate technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience with full ICM platform implementation projects, including vendor selection and data migration</li>\n<li>Proficiency with APIs and integration platforms (e.g., Workato, Fivetran, custom ETL)</li>\n<li>Experience with Salesforce administration or development</li>\n<li>Familiarity with 
consumption-based compensation models</li>\n<li>Experience with Claude Code or other AI-assisted development tools</li>\n<li>Background in sales operations, revenue operations, or finance systems</li>\n</ul>\n<p><strong>Logistics:</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f86b2355-24d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5141849008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data integration tools","system configuration","ICM platforms","Xactly","Varicent","Anaplan"],"x-skills-preferred":["APIs","integration platforms","Workato","Fivetran","custom ETL","Salesforce administration","development","consumption-based compensation models","Claude Code","AI-assisted development tools","sales operations","revenue operations","finance systems"],"datePosted":"2026-04-18T15:41:00.577Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ontario, CAN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data integration tools, system configuration, ICM platforms, Xactly, Varicent, Anaplan, APIs, integration platforms, Workato, Fivetran, custom ETL, Salesforce administration, development, consumption-based compensation models, Claude Code, AI-assisted development tools, sales operations, revenue operations, finance systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e503559e-cf7"},"title":"Senior Machine Learning Engineer","description":"<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>\n<p><strong>Job Description:</strong></p>\n<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>\n<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. 
We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>\n<p><strong>About Us:</strong></p>\n<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>\n<p><strong>Job Responsibilities:</strong></p>\n<p>As part of this role, you will:</p>\n<ul>\n<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>\n<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human in the loop, escalations, and feeding human decisions back into the service</li>\n<li>Implement testing, observability, alerting, and disaster recovery for all services</li>\n<li>Implement tracing, performance, and regression testing</li>\n<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>\n</ul>\n<p><strong>Ideal Candidate:</strong></p>\n<p>The ideal candidate for the role has:</p>\n<ul>\n<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>\n<li>Expertise with:\n<ul>\n<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>\n<li>SQL, dbt, Python</li>\n<li>OLAP / OLTP data modelling and architecture</li>\n<li>Key-value stores: Redis, DynamoDB, or equivalent</li>\n<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>\n<li>API frameworks: FastAPI, Flask, etc.</li>\n<li>Production ML Service experience</li>\n<li>Working across a full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>\n</ul>\n</li>\n</ul>\n<p><strong>Total Rewards Package:</strong></p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p><strong>Salary Range:</strong></p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $200,700 - $250,900</li>\n<li>Canadian employees (any location): CAD 189,700 - 237,100</li>\n</ul>\n<p><strong>Diversity &amp; Belonging:</strong></p>\n<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. 
All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e503559e-cf7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5639559004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)","x-skills-required":["Snowflake","dbt","Fivetran","Airbyte","Dagster","Airflow","SQL","Python","OLAP / OLTP data modelling and architecture","Redis","dynamoDB","Kinesis","Kafka","Redpanda","FastAPI","Flask","Production ML Service experience","Haskell","React","TypeScript"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:45:16.566Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, dynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189700,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4475ebe1-e8a"},"title":"Data Engineering Intern","description":"<p>We&#39;re seeking a motivated and curious Data Engineering Intern to join our Data Platform team. This internship offers a unique opportunity to gain hands-on experience building and maintaining real data infrastructure within a fast-growing fintech environment.</p>\n<p>As a Data Engineering Intern, you&#39;ll collaborate on thoughtful projects and bring your fresh perspectives to impact our product and families. 
You&#39;ll assist in building and maintaining data pipelines using Airflow to orchestrate workflows that ingest, transform, and deliver data into Snowflake and Databricks.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Assist in building and maintaining data pipelines using Airflow to orchestrate workflows that ingest, transform, and deliver data into Snowflake and Databricks</li>\n<li>Support the design and implementation of data models in Snowflake that serve analytics, reporting, and ML use cases</li>\n<li>Help develop and maintain transformation logic using dbt, including writing models, tests, and documentation</li>\n<li>Contribute to data quality checks and validation processes to ensure accuracy, completeness, and timeliness of data</li>\n<li>Assist with infrastructure automation using Terraform to manage cloud resources in AWS</li>\n<li>Participate in troubleshooting data pipeline issues and investigating root causes alongside senior engineers</li>\n<li>Collaborate with data analysts, analytics engineers, and business stakeholders to understand requirements and contribute to technical solutions</li>\n<li>Help create and maintain documentation for data pipelines, data models, and infrastructure processes</li>\n<li>Participate in code reviews to develop best practices and learn from the team</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>A 3.0 GPA or higher</li>\n<li>Currently pursuing a Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Information Technology, or a related field</li>\n<li>Basic understanding of SQL and comfort manipulating data</li>\n<li>Interest in data engineering, data infrastructure, and/or analytics engineering</li>\n<li>Familiarity with Python for scripting, data processing, or automation (preferred)</li>\n<li>Basic understanding of cloud platforms, particularly AWS, is a plus</li>\n<li>Strong analytical thinking and problem-solving skills , especially comfort working through ambiguity</li>\n<li>Good communication and collaboration skills; able to work cross-functionally with technical and non-technical teammates</li>\n<li>Eagerness to take ownership of your work and ask thoughtful questions</li>\n</ul>\n<p>Learning Opportunities:</p>\n<ul>\n<li><p>You&#39;ll have the opportunity to gain experience with technologies including:</p>\n<ul>\n<li>Snowflake</li>\n<li>dbt (data build tool)</li>\n<li>Apache Airflow</li>\n<li>AWS (S3, Lambda, EC2, IAM)</li>\n<li>Databricks</li>\n<li>Terraform</li>\n<li>Fivetran</li>\n<li>Segment</li>\n</ul>\n</li>\n</ul>\n<p>This internship provides an excellent foundation for a career in data engineering, analytics engineering, or data architecture within the fintech industry.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4475ebe1-e8a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/b5d9d9b2-9d06-4db7-932c-30fd4a43825d","x-work-arrangement":"hybrid","x-experience-level":"intern","x-job-type":"internship","x-salary-range":null,"x-skills-required":["SQL","Python","Airflow","Snowflake","dbt","AWS","Databricks","Terraform","Fivetran","Segment"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:36:54.798Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Finance","skills":"SQL, Python, Airflow, Snowflake, dbt, AWS, Databricks, Terraform, Fivetran, Segment"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ace7478-7a2"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>\n<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>\n</ul>\n<ul>\n<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>\n</ul>\n<ul>\n<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>\n</ul>\n<ul>\n<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. 
Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>\n</ul>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>\n</ul>\n<ul>\n<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Can set technical direction for a team, not just execute within it</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one of:</li>\n</ul>\n<ul>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>\n</ul>\n<ul>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>\n</ul>\n<ul>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>\n</ul>\n<ul>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>\n</ul>\n<ul>\n<li>Experience working in fintech, financial services, or highly regulated environments</li>\n</ul>\n<ul>\n<li>Security engineering background with focus on data protection and access controls</li>\n</ul>\n<p><strong>Technologies We Use:</strong></p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>\n</ul>\n<ul>\n<li>Storage: GCS, S3</li>\n</ul>\n<ul>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>\n</ul>\n<ul>\n<li>Languages: Python, Go, SQL</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ace7478-7a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures","data warehousing","ETL/ELT pipelines","analytics infrastructure","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"x-skills-preferred":["data governance","access control","cloud storage","reliability","data platform","tooling","self-service analytics","data processing infrastructure","query performance","cost management"],"datePosted":"2026-03-08T13:52:03.469Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25829001-a4a"},"title":"Incentive Compensation System Engineer","description":"<p>As an Incentive Compensation Systems Engineer, you will lead the technical implementation, configuration, and optimization of our incentive compensation management (ICM) platform. 
You will work closely with Revenue Operations, Data Engineering, Finance, Accounting, and HR to translate incentive compensation designs into scalable system configurations, ensuring accurate calculations, reliable integrations, and efficient workflows.</p>\n<p><strong>Responsibilities:</strong></p>\n<p><strong>ICM Platform Implementation &amp; Configuration</strong></p>\n<ul>\n<li>Lead the end-to-end implementation and configuration of our ICM platform, including plan modeling, crediting rules, quota management, and payout calculations</li>\n<li>Translate compensation plan documents into system logic, ensuring accurate and auditable calculation outputs</li>\n<li>Design and maintain system workflows for plan changes, exceptions, and adjustments</li>\n<li>Develop and execute testing strategies to validate system accuracy before each compensation cycle</li>\n<li>Provide technical guidance during plan design discussions, advising on system capabilities and constraints</li>\n</ul>\n<p><strong>Data Integration &amp; Architecture</strong></p>\n<ul>\n<li>Build and maintain integrations between the ICM platform and source systems (CRM, ERP, HRIS, billing platforms)</li>\n<li>Ensure data integrity across the compensation data pipeline, from opportunity data through final payout</li>\n<li>Partner with Data Engineering and Revenue Operations to define data requirements and resolve data quality issues</li>\n<li>Document data flows, transformation logic, and system dependencies</li>\n</ul>\n<p><strong>System Administration &amp; Optimization</strong></p>\n<ul>\n<li>Serve as the primary system administrator for the ICM platform, managing user access, security, and system health</li>\n<li>Identify opportunities to streamline and automate compensation processes, reducing manual intervention and cycle time</li>\n<li>Monitor system performance and troubleshoot calculation errors or integration failures</li>\n<li>Evaluate and recommend new technologies, including Claude-powered solutions, to enhance system capabilities</li>\n</ul>\n<p><strong>Reporting &amp; Analytics</strong></p>\n<ul>\n<li>Build and maintain reporting infrastructure within the ICM platform to support compensation dashboards and attainment tracking</li>\n<li>Partner with Revenue Operations and Accounting to deliver accurate, timely compensation data for forecasting and accruals</li>\n<li>Enable self-service reporting for compensation administrators and business partners</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>A passion for Anthropic&#39;s mission to build safe, transformative AI systems</li>\n<li>5+ years of experience implementing or administering ICM platforms (e.g., Xactly, Varicent, Anaplan)</li>\n<li>Strong technical skills with experience in SQL, data integration tools, and system configuration</li>\n<li>Proven ability to translate business requirements into system logic and maintain accurate, auditable calculations</li>\n<li>Experience managing data pipelines and ensuring data quality across integrated systems</li>\n<li>Strong problem solving skills with a systematic approach to debugging and root cause analysis</li>\n<li>Excellent documentation practices and ability to communicate technical concepts to non-technical stakeholders</li>\n<li>Bachelor&#39;s degree in a technical or quantitative field preferred</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Experience with full ICM platform implementation projects, including vendor selection and data 
migration</li>\n<li>Proficiency with APIs and integration platforms (e.g., Workato, Fivetran, custom ETL)</li>\n<li>Experience with Salesforce administration or development</li>\n<li>Familiarity with consumption-based compensation models</li>\n<li>Experience with Claude Code or other AI-assisted development tools</li>\n<li>Background in sales operations, revenue operations, or finance systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. <strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. <strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. <strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for money, fees, or banking information.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25829001-a4a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5141849008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data integration tools","system configuration","ICM platforms","Xactly","Varicent","Anaplan"],"x-skills-preferred":["APIs","integration platforms","Workato","Fivetran","custom ETL","Salesforce administration","development","consumption-based compensation models","Claude Code","AI-assisted development tools","sales operations","revenue operations","finance systems"],"datePosted":"2026-03-08T13:44:40.158Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Ontario"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data integration tools, system configuration, ICM platforms, Xactly, Varicent, Anaplan, APIs, integration platforms, Workato, Fivetran, custom ETL, Salesforce administration, development, consumption-based compensation models, Claude Code, AI-assisted development tools, sales operations, revenue operations, finance systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fb622500-15e"},"title":"Data Scientist, Marketing","description":"<p>You will directly impact Replit&#39;s growth by turning user behavior into actionable insights that optimize our marketing efforts, improve conversion funnels, and drive sustainable revenue growth across our self-serve and enterprise segments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and analyse marketing experiments to optimise campaigns, messaging, and channel performance across email, paid ads, social, and content marketing.</li>\n<li>Build attribution models and multi-touch conversion funnels to understand the customer journey from first touch to paid conversion.</li>\n<li>Develop predictive models to identify high-intent prospects, optimise lead scoring, and improve targeting for paid acquisition campaigns.</li>\n<li>Partner with marketing, growth, and revenue teams to translate business questions into rigorous analysis and clear recommendations.</li>\n<li>Create self-service dashboards and automated reporting that surface key marketing metrics (CAC, LTV, ROAS, conversion rates) for go-to-market teams.</li>\n<li>Build and maintain data pipelines that integrate marketing platforms (Google Ads, Meta, Iterable, Segment, etc.) 
with our product analytics.</li>\n</ul>\n<p><strong>Examples of what you could do</strong></p>\n<ul>\n<li>Build propensity models to identify which free users are most likely to convert to plans based on usage patterns and engagement signals.</li>\n<li>Analyse cohort behaviour and retention patterns to optimise lifecycle marketing campaigns and reduce churn.</li>\n<li>Develop segmentation models to personalise messaging and targeting for different user personas (students, hobbyists, professional developers, enterprise teams).</li>\n<li>Build real-time alerting systems to flag anomalies in campaign performance or conversion metrics, and automate bidding adjustments across platforms.</li>\n</ul>\n<p><strong>Required skills and experience</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Economics, or related field, OR equivalent real-world experience in data roles.</li>\n<li>4+ years of experience in data science or related roles with a focus on marketing, growth, or business analytics.</li>\n<li>Strong SQL skills and experience working with large datasets, particularly event-level user behaviour data, and designing ETL workflows using dbt.</li>\n<li>Proficiency in Python and data science libraries (pandas, scikit-learn, statsmodels, etc.).</li>\n<li>Experience designing and analysing A/B tests and experiments, including statistical rigor around sample sizing, significance testing, and causal inference.</li>\n<li>Experience building dashboards and visualisations (Looker, Tableau, Mode, or similar tools).</li>\n<li>Ability to translate ambiguous business questions into structured analysis and communicate findings clearly to non-technical stakeholders.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience with modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.).</li>\n<li>Background in growth analytics, marketing analytics, or conversion rate optimisation at a SaaS or PLG company.</li>\n<li>Familiarity with marketing technology platforms (Google Analytics, Segment, Iterable, Marketo, HubSpot, etc.).</li>\n<li>Experience with attribution modelling, marketing mix modelling, or incrementality testing.</li>\n<li>Understanding of PLG (product-led growth) motions and self-serve conversion funnels.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience analysing freemium or usage-based pricing models.</li>\n<li>Understanding of developer tools, collaborative coding environments, or technical products.</li>\n<li>Experience with causal inference methods (difference-in-differences, synthetic control, propensity score matching).</li>\n<li>Familiarity with customer data platforms (CDPs) and event tracking implementation.</li>\n<li>Experience working with sales and customer success data to analyse expansion revenue and upsell opportunities.</li>\n</ul>\n<p><strong>Full-Time Employee Benefits Include</strong></p>\n<ul>\n<li>Competitive Salary &amp; Equity</li>\n<li>401(k) Program with a 4% match</li>\n<li>Health, Dental, Vision and Life Insurance</li>\n<li>Short Term and Long Term Disability</li>\n<li>Paid Parental, Medical, Caregiver Leave</li>\n<li>Commuter Benefits</li>\n<li>Monthly Wellness Stipend</li>\n<li>Autonomous Work Environment</li>\n<li>In Office Set-Up Reimbursement</li>\n<li>Flexible Time Off (FTO) + Holidays</li>\n<li>Quarterly Team Gatherings</li>\n<li>In Office Amenities</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fb622500-15e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/c05749db-f413-4091-a95c-c8e0aa1b5630","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180K - $250K","x-skills-required":["SQL","Python","data science libraries (pandas, scikit-learn, statsmodels, etc.)","ETL workflows using dbt","A/B tests and experiments","dashboard and visualisation tools (Looker, Tableau, Mode, etc.)"],"x-skills-preferred":["modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.)","growth analytics, marketing analytics, or conversion rate optimisation","marketing technology platforms (Google Analytics, Segment, Iterable, etc.)","attribution modelling, marketing mix modelling, or incrementality testing"],"datePosted":"2026-03-07T15:20:03.203Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, data science libraries (pandas, scikit-learn, statsmodels, etc.), ETL workflows using dbt, A/B tests and experiments, dashboard and visualisation tools (Looker, Tableau, Mode, etc.), modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.), growth analytics, marketing analytics, or conversion rate optimisation, marketing technology platforms (Google Analytics, Segment, Iterable, etc.), attribution modelling, marketing mix modelling, or incrementality testing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":250000,"unitText":"YEAR"}}}]}