{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/etl-tools"},"x-facet":{"type":"skill","slug":"etl-tools","display":"ETL Tools","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bebb6dc-380"},"title":"Staff Software Engineer, Platform","description":"<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>\n<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>\n<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. 
You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>\n<p>Impact and Responsibilities:</p>\n<ul>\n<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>\n</ul>\n<ul>\n<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>\n</ul>\n<ul>\n<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>\n</ul>\n<ul>\n<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>\n</ul>\n<p>Ideally, you’d have:</p>\n<ul>\n<li>8+ years of full-time engineering experience post-graduation, with a specialty in back-end systems.</li>\n</ul>\n<ul>\n<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>\n</ul>\n<ul>\n<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>\n</ul>\n<ul>\n<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>\n</ul>\n<ul>\n<li>Experience with orchestration platforms, such as Temporal and AWS Step 
Functions.</li>\n</ul>\n<ul>\n<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>\n</ul>\n<ul>\n<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>\n</ul>\n<ul>\n<li>Experience scaling products at hyper-growth startups.</li>\n</ul>\n<ul>\n<li>Excitement to work with AI technologies.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bebb6dc-380","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4649893005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["Software development","Distributed systems","Public cloud platforms","Containerization & deployment technologies","Orchestration platforms","NoSQL document databases","Structured databases","Software engineering best practices","CI/CD tooling"],"x-skills-preferred":["Data warehouses","Data pipeline/ETL tools","Scaling products at hyper-growth startups","AI 
technologies"],"datePosted":"2026-04-18T16:00:12.545Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software development, Distributed systems, Public cloud platforms, Containerization & deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1869fa15-51d"},"title":"Software Engineer, Platform","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>\n<p>You will drive the design and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>\n<p>Ideally, you&#39;d have 3+ years of full-time engineering experience post-graduation, with a specialty in back-end systems. 
You should have extensive experience in software development, a deep understanding of distributed systems and public cloud platforms (AWS preferred), and a track record of independent ownership of successful engineering projects. You should also possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>\n<p>You should work fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, and Docker, and have experience with orchestration platforms such as Temporal and AWS Step Functions, NoSQL document databases (MongoDB), and structured databases (Postgres). Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI) is also expected.</p>\n<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. 
Excitement to work with AI technologies is a must.</p>","url":"https://yubhub.co/jobs/job_1869fa15-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4594879005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["software development","distributed systems","public cloud platforms","containerization & deployment technologies","orchestration platforms","NoSQL document databases","structured databases","software engineering best practices","CI/CD tooling"],"x-skills-preferred":["data warehouses","data pipeline/ETL tools","authentication/authorization systems","scaling products at hyper-growth startups","AI technologies"],"datePosted":"2026-04-18T15:57:02.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, distributed systems, public cloud platforms, containerization & deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09a4d1ce-cde"},"title":"Data Engineer","description":"<p>We are 
looking for an experienced Data Engineer to partner with our Data Science and Data Infrastructure teams to own and scale our data pipelines. You&#39;ll also work closely with stakeholders across business teams including sales, marketing, and finance to ensure that the data they need arrives promptly and reliably.</p>\n<p>As a Data Engineer at Figma, you will be responsible for building and maintaining scalable data pipelines that connect various cloud data sources. You will develop a deep understanding of Figma&#39;s core data models and optimize data pipelines for scale. You will partner with the Data Science and Data Infrastructure teams to build new foundational data sets that are trusted, well understood, and enable self-service.</p>\n<p>You will work with a wide range of cross-functional stakeholders to derive requirements and architect shared datasets, with the ability to document, simplify, and explain complex problems to different types of audiences. You will establish best practices for the development of specialized data sets for analytics and modeling.</p>\n<p>We&#39;d love to hear from you if you have:</p>\n<ul>\n<li>4+ years in a relevant field.</li>\n<li>Fluency with both SQL and Python.</li>\n<li>Familiarity with Snowflake, dbt, Dagster, and ETL/reverse ETL tools.</li>\n<li>Excellent judgment and creative problem-solving skills.</li>\n<li>A self-starting mindset along with strong communication and collaboration skills.</li>\n</ul>\n<p>While not required, it&#39;s an added plus if you also have:</p>\n<ul>\n<li>Knowledge of data modeling methodologies to design and build robust data architectures for insightful analytics.</li>\n<li>Experience with business systems such as Salesforce, Customer IO, Stripe, and NetSuite.</li>\n</ul>\n<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. 
If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.</p>","url":"https://yubhub.co/jobs/job_09a4d1ce-cde","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5220003004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$348,000 USD","x-skills-required":["SQL","Python","Snowflake","dbt","Dagster","ETL/reverse ETL tools"],"x-skills-preferred":["data modeling methodologies","business systems such as Salesforce, Customer IO, Stripe, NetSuite"],"datePosted":"2026-04-18T15:51:04.727Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA • New York, NY • United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Snowflake, dbt, Dagster, ETL/reverse ETL tools, data modeling methodologies, business systems such as Salesforce, Customer IO, Stripe, NetSuite","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":348000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_50808499-c0b"},"title":"Senior Customer Solutions Resident Architect","description":"<p>About Us</p>\n<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. 
Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</p>\n<p>As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>\n<p>We’re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter. At our core, we believe in empowering data practitioners:</p>\n<ul>\n<li>Reliable, high-quality data is the fuel that propels AI-powered data engineering.</li>\n</ul>\n<ul>\n<li>AI is changing data work, fast. dbt’s data control plane keeps data engineers ahead of that curve.</li>\n</ul>\n<ul>\n<li>We empower engineers to deliver reliable, governed data faster, cheaper, and at scale.</li>\n</ul>\n<p>dbt Labs is now synonymous with analytics engineering, defining the modern data stack and serving as the data control plane for enterprise teams around the world. And we’re just getting started.</p>\n<p>We’re growing fast and building a team of passionate, curious people across the globe. Learn more about what makes us special by checking out our values.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking an experienced Senior Customer Solutions Resident Architect to join our team. In this role, you will drive critical customer outcomes by delivering high-impact technical guidance to strategic accounts. 
You will be part of a high-visibility initiative that supports pre-sales, accelerates adoption, enables key migrations, and mitigates churn risks.</p>\n<p>This role is designed to deploy RA-level expertise flexibly, aligning with customer and business needs to drive growth, retention, and expansion.</p>\n<p><strong>What You’ll Do</strong></p>\n<ul>\n<li>Accelerate Customer Success Across the Lifecycle</li>\n</ul>\n<ul>\n<li>Support strategic pre-sales opportunities by providing technical expertise to prospects</li>\n</ul>\n<ul>\n<li>Assist in launching and onboarding new customers who have not purchased RA services, ensuring they successfully adopt dbt Cloud</li>\n</ul>\n<ul>\n<li>Execute proactive adoption plays, including migrations, new feature implementations (e.g., Semantic Layer, Mesh), and major version upgrades</li>\n</ul>\n<ul>\n<li>Lead reactive adoption initiatives to de-risk churn or contraction and position accounts for future growth</li>\n</ul>\n<ul>\n<li>Deliver Technical Excellence</li>\n</ul>\n<ul>\n<li>Advise on architecture, design, implementation, troubleshooting, and best practices in dbt Cloud environments</li>\n</ul>\n<ul>\n<li>Build solution MVPs and guide long-term technical strategies tailored to customer needs</li>\n</ul>\n<ul>\n<li>Engage on multiple projects simultaneously with clear scoping, start and end dates, and outcome tracking</li>\n</ul>\n<ul>\n<li>Collaborate Across Teams</li>\n</ul>\n<ul>\n<li>Partner closely with Customer Solutions Architects (CSAs), Sales, Solutions Architects, Training, and Support</li>\n</ul>\n<ul>\n<li>Provide feedback to Product and Engineering to improve customer experience and prioritize technical needs</li>\n</ul>\n<ul>\n<li>Champion customer success through thoughtful, transparent communication and cross-functional collaboration</li>\n</ul>\n<ul>\n<li>Advance Best Practices and Team Impact</li>\n</ul>\n<ul>\n<li>Help build out and refine this evolving function alongside the broader RA 
organization</li>\n</ul>\n<ul>\n<li>Track and manage capacity and engagement effectiveness similarly to other RA-led initiatives</li>\n</ul>\n<p><strong>What You’ll Need</strong></p>\n<ul>\n<li>5+ years of experience in technical customer-facing roles such as post-sales consulting, technical architecture, or solution delivery</li>\n</ul>\n<ul>\n<li>Expertise with at least one modern cloud data platform (Snowflake, Databricks, BigQuery, or Redshift)</li>\n</ul>\n<ul>\n<li>Hands-on experience deploying or configuring dbt Cloud, with at least 1 year working with dbt</li>\n</ul>\n<ul>\n<li>Strong proficiency in SQL; working knowledge of Python in analytics contexts preferred</li>\n</ul>\n<ul>\n<li>Comfort leading technical project delivery, managing scope, timelines, and stakeholder expectations across multiple simultaneous engagements</li>\n</ul>\n<ul>\n<li>Clear, concise communication skills for both technical and executive audiences</li>\n</ul>\n<ul>\n<li>A collaborative mindset, thriving in a remote, transparent, and highly cross-functional organization</li>\n</ul>\n<ul>\n<li>Willingness to travel 2–4 times per year for company-wide events</li>\n</ul>\n<p><strong>What Will Make You Stand Out</strong></p>\n<ul>\n<li>dbt Analytics Engineering Certification</li>\n</ul>\n<ul>\n<li>Ability to influence technical direction and build consensus across internal and customer teams</li>\n</ul>\n<ul>\n<li>Experience with traditional enterprise ETL tools (e.g., Informatica, Datastage, Talend) and how they relate to modern data workflows</li>\n</ul>\n<ul>\n<li>Familiarity with strategic sales or renewal processes, including proactive and reactive adoption efforts</li>\n</ul>\n<ul>\n<li>Proven success accelerating usage, adoption, and expansion in large, complex accounts</li>\n</ul>\n<p><strong>Remote Hiring Process</strong></p>\n<ul>\n<li>Interview with a Talent Acquisition Partner</li>\n</ul>\n<ul>\n<li>Interview with Hiring 
Manager</li>\n</ul>\n<ul>\n<li>Task</li>\n</ul>\n<ul>\n<li>Task Review</li>\n</ul>\n<ul>\n<li>Final Values Interview</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time with a culture that actively encourages time off</li>\n</ul>\n<ul>\n<li>401k plan with 3% guaranteed company contribution</li>\n</ul>\n<ul>\n<li>Comprehensive healthcare coverage</li>\n</ul>\n<ul>\n<li>Generous paid parental leave</li>\n</ul>\n<ul>\n<li>Health &amp; wellness stipend</li>\n</ul>\n<ul>\n<li>Flexible stipends for:</li>\n</ul>\n<ul>\n<li>Home office setup</li>\n</ul>\n<ul>\n<li>Learning and development</li>\n</ul>\n<ul>\n<li>Office space</li>\n</ul>\n<ul>\n<li>And more!</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs’ total rewards during your interview process.</p>\n<p>In Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, and Austin, an alternate range may apply, as specified below.</p>\n<ul>\n<li>The typical starting salary range for this role in the specific locations listed is: $163,000 - $200,000</li>\n</ul>\n<ul>\n<li>The typical starting salary range for this role is: $146,000 - $180,000</li>\n</ul>","url":"https://yubhub.co/jobs/job_50808499-c0b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4682381005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,000 - $180,000","x-skills-required":["modern cloud data platform","dbt 
Cloud","SQL","Python","technical project delivery","clear, concise communication skills"],"x-skills-preferred":["dbt Analytics Engineering Certification","traditional enterprise ETL tools","strategic sales or renewal processes","proven success accelerating usage, adoption, and expansion"],"datePosted":"2026-04-18T15:50:33.608Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"US - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"modern cloud data platform, dbt Cloud, SQL, Python, technical project delivery, clear, concise communication skills, dbt Analytics Engineering Certification, traditional enterprise ETL tools, strategic sales or renewal processes, proven success accelerating usage, adoption, and expansion","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":146000,"maxValue":180000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_30da0df8-cc9"},"title":"F&S COE Analyst","description":"<p>We are seeking an analytically-driven Data Analyst to join our Finance &amp; Strategy team at Stripe. This role bridges the gap between data science and financial planning, requiring someone who can transform complex business data into actionable financial insights.</p>\n<p>You will build sophisticated dashboards, develop predictive models, and serve as the technical backbone for our FP&amp;A and GTM analytics initiatives. 
This is a unique opportunity for a data professional with financial acumen to directly influence strategic business decisions in a high-growth fintech environment.</p>\n<p><strong>Financial Data Analytics &amp; Modeling</strong></p>\n<ul>\n<li>Design, build, and maintain financial dashboards for FP&amp;A, Revenue Operations, and GTM teams using Tableau, Power BI, or Looker</li>\n<li>Develop automated financial reporting solutions that reduce manual effort and improve data accuracy</li>\n<li>Create sophisticated data models to support budgeting, forecasting, variance analysis, and scenario planning</li>\n<li>Build predictive models for revenue forecasting, customer lifetime value, churn analysis, and unit economics</li>\n</ul>\n<p><strong>Business Intelligence &amp; Reporting</strong></p>\n<ul>\n<li>Partner with Finance Business Partners and FP&amp;A teams to translate business requirements into technical solutions</li>\n<li>Design and implement data infrastructure for financial planning cycles (monthly/quarterly reviews, annual budgets, long-range planning)</li>\n<li>Develop self-service analytics capabilities enabling finance teams to access real-time business insights</li>\n<li>Create executive dashboards tracking key financial and operational metrics (ARR, bookings, retention, CAC, LTV)</li>\n</ul>\n<p><strong>Data Engineering &amp; Analytics Infrastructure</strong></p>\n<ul>\n<li>Write complex SQL queries to extract, transform, and analyze large datasets from multiple source systems</li>\n<li>Build ETL pipelines to integrate financial data from ERP, CRM, billing, and data warehouse systems</li>\n<li>Ensure data quality, consistency, and governance across financial reporting systems</li>\n<li>Optimize database performance and data architecture for scalability</li>\n</ul>\n<p><strong>Strategic Analysis &amp; Insights</strong></p>\n<ul>\n<li>Conduct deep-dive analyses on business performance, identifying trends, anomalies, and opportunities</li>\n<li>Support 
strategic initiatives through ad-hoc financial modeling and what-if scenario analysis</li>\n<li>Translate complex data findings into clear, actionable recommendations for leadership</li>\n<li>Collaborate with Data Science teams to develop advanced analytics and ML models for finance use cases</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>Advanced SQL proficiency (complex joins, window functions, CTEs, query optimization)</li>\n<li>Expert-level experience with at least one BI tool (Tableau, Power BI, Looker, or Qlik)</li>\n<li>Advanced Excel/Google Sheets skills (pivot tables, complex formulas, data modeling)</li>\n<li>Python or R for data analysis, automation, and statistical modeling</li>\n<li>Cloud data platforms (Snowflake, BigQuery, Redshift, Databricks)</li>\n<li>ETL tools (dbt, Airflow, Fivetran) and version control (Git)</li>\n</ul>\n<p><strong>Financial &amp; Business Acumen</strong></p>\n<ul>\n<li>Experience in data analytics within finance, FP&amp;A, or revenue operations functions</li>\n<li>Strong understanding of financial statements (P&amp;L, balance sheet, cash flow)</li>\n<li>Knowledge of key financial metrics: ARR, MRR, bookings, revenue recognition, CAC, LTV, gross margin, EBITDA</li>\n<li>Experience with financial planning processes: budgeting, forecasting, variance analysis, scenario modeling</li>\n<li>Understanding of SaaS/subscription business models and revenue recognition principles (ASC 606 preferred)</li>\n</ul>\n<p><strong>Analytical &amp; Problem-Solving</strong></p>\n<ul>\n<li>Proven ability to work with large, complex datasets and derive meaningful insights</li>\n<li>Experience building financial models and dashboards that drive executive decision-making</li>\n<li>Strong statistical analysis skills and understanding of data visualization best practices</li>\n<li>Track record of translating ambiguous business problems into structured analytical frameworks</li>\n</ul>\n<p><strong>Preferred 
Experience</strong></p>\n<ul>\n<li>Background in fintech, payments, B2B SaaS, or high-growth technology companies</li>\n<li>Experience supporting GTM analytics (sales forecasting, pipeline analysis, quota setting)</li>\n<li>Familiarity with finance systems: NetSuite, Anaplan, Adaptive Planning, Salesforce, Stripe Billing</li>\n<li>Exposure to data science methodologies and machine learning concepts</li>\n<li>Previous work in cross-functional environments collaborating with finance, data science, and business teams</li>\n</ul>\n<p><strong>Key Competencies</strong></p>\n<ul>\n<li>Business Acumen: Ability to understand complex business models and translate them into data requirements</li>\n<li>Technical Excellence: Deep technical skills with commitment to code quality and best practices</li>\n<li>Communication: Exceptional ability to explain technical concepts to non-technical stakeholders</li>\n<li>Stakeholder Management: Experience partnering with senior leaders and influencing through data</li>\n<li>Ownership Mindset: Self-directed with ability to manage multiple priorities and drive projects to completion</li>\n<li>Continuous Learning: Curiosity to learn new tools, techniques, and business domains</li>\n<li>Attention to Detail: Commitment to data accuracy and quality in high-stakes financial reporting</li>\n</ul>\n<p><strong>Education</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Finance, Economics, Statistics, Mathematics, Computer Science, Engineering, or related quantitative field</li>\n<li>Advanced degree (MBA, MS in Analytics/Data Science) or relevant certifications (CFA, CPA, data analytics certifications) a plus</li>\n</ul>","url":"https://yubhub.co/jobs/job_30da0df8-cc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7597624","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Tableau","Power BI","Looker","Python","R","Cloud data platforms","ETL tools","Version control"],"x-skills-preferred":["Machine learning","Data science","Finance systems","Data visualization"],"datePosted":"2026-03-31T18:15:28.979Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"SQL, Tableau, Power BI, Looker, Python, R, Cloud data platforms, ETL tools, Version control, Machine learning, Data science, Finance systems, Data visualization"}]}