<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>561aa0f8-c82</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools that automate and improve the accessibility and usefulness of data.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, ClickHouse, and PostgreSQL, with software built using Go, JavaScript/TypeScript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, ClickHouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficiency in backend languages like Go, Python, or TypeScript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing technology already used by Cloudflare’s enterprise customers at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype></Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Trino, Iceberg, ClickHouse, PostgreSQL, Go, JavaScript/TypeScript, Python, SQL, Airflow, DBT, Data governance, Machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-26</Postedate>
    </job>
    <job>
      <externalid>4bd6b781-c75</externalid>
      <Title>Principal Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions.</p>
<p>About the Role</p>
<p>We are looking for an experienced systems engineer with deep expertise in data to help us expand, mature, and maintain our data infrastructure. You’ll lead technical architecture and implementation to scale our data platform, manage access seamlessly while accounting for privacy and security, create data pipelines and data products, and build tools that automate and improve the accessibility and usefulness of data.</p>
<p>Responsibilities</p>
<ul>
<li>Define, design and execute strategic technical architecture for highly visible, highly critical data infrastructure at the company.</li>
<li>Lead the design and development of tools and infrastructure to improve and scale our data infrastructure at Cloudflare.</li>
<li>Lead the design and development of data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Become a subject matter expert in our data platforms, tools, and infrastructure, as well as in the data itself, to guide and enable stakeholders with data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, ClickHouse, and PostgreSQL, with software built using Go, JavaScript/TypeScript, Python, and others.</li>
<li>Mentor and support junior engineers on the team, reinforcing a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>8+ years of experience as a software engineer with a focus on designing, building, and scaling data infrastructure.</li>
<li>Proven experience leading technical initiatives in a cross-functional context, working with multiple stakeholders and driving value delivery.</li>
<li>Extensive experience with data infrastructure at scale, including tools like Trino, Spark, Iceberg/Delta Lake, Kafka, ClickHouse, and PostgreSQL.</li>
<li>Background designing, building, and debugging data pipelines at scale.</li>
<li>Proficiency in backend languages like Go, Python, TypeScript, and Rust, along with SQL.</li>
<li>Excellent analytical skills, with a focus on understanding data and how stakeholders use it to drive value.</li>
<li>Strong communication skills, especially around articulating technical concepts for technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing infrastructure in Kubernetes.</li>
<li>Experience with data governance platforms and processes, with a focus on privacy and auditability.</li>
<li>Knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing technology already used by Cloudflare’s enterprise customers at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, Kubernetes, Trino, Iceberg, ClickHouse, PostgreSQL, Go, JavaScript/TypeScript, Python, SQL, data orchestration, Airflow, DBT, data governance, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7488760?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-26</Postedate>
    </job>
    <job>
      <externalid>8ac0cd89-e81</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>
<p>You write advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You are familiar with the modern data stack, including dbt and Snowflake. You have a growth mindset and an eagerness to learn. You exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs. You practice excellent communication skills, and you tailor explanations of technical concepts to a variety of audiences.</p>
<p>Nice-to-haves include exposure to Apache Airflow or other DAG frameworks, experience with Tableau, Looker, or a similar visualization/business intelligence platform, experience with operational tools and business systems, and familiarity with Python.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, operational tools and business systems, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide incredible customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>e05b4dba-366</externalid>
      <Title>GTM Analytics Engineer</Title>
      <Description><![CDATA[<p>About Gusto</p>
<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff: payroll, health insurance, 401(k)s, and HR, so owners can focus on their craft and their customers.</p>
<p>With teams in Denver, San Francisco, and New York, we support more than 500,000 small businesses nationwide and are building a workplace that reflects the people we serve.</p>
<p>All full-time employees receive competitive base pay, benefits, and equity (RSUs), because everyone who helps build Gusto should share in its success. Offer amounts are determined by role, level, and location.</p>
<p>AI is a fundamental part of how work gets done at Gusto. We expect all team members to actively engage with AI tools relevant to their role and grow their fluency as the technology evolves. AI experience requirements vary by role and will be assessed during the interview process.</p>
<p>About the Role</p>
<p>The Revenue Operations team at Gusto is a full-stack team responsible for data, analytics, and operational excellence in pursuit of scaling our revenue growth. The Go-To-Market Analytics team is one functional area, responsible for defining and owning the metrics for scaling our Sales teams.</p>
<p>Gusto is looking for a Revenue Analytics Engineer to build and maintain foundational data infrastructure that is crucial to growing and scaling our Sales efforts. In this role, you will partner with our GTM analysts to scope and deliver data products for an audience ranging from company and revenue leadership to operational and frontline sales.</p>
<p>In doing so, you will own significant portions of our sales data from end-to-end, focusing on the transformation, design, and visualization workflows. In addition, you will establish best practices for systems design and workflows to maximize usefulness of AI at scale.</p>
<p>This role will report to the Head of Go-To-Market analytics, and partner closely with Data Science, Data Platform, and Business Technology teams.</p>
<p>Here’s what you’ll do day-to-day:</p>
<ul>
<li>Establish relationships with internal stakeholders to determine business needs, scoping and delivering solutions through data products</li>
<li>Design, build, and maintain data pipelines and dashboards to automate “keep the lights on” work out of manually managed processes</li>
<li>Build a data foundation by incorporating business logic into raw data tables, laying the groundwork to enable value-add insights</li>
<li>Create a dynamic reporting environment based on stakeholder needs</li>
<li>Identify data discrepancies and set best practices for data logic to create a clean and well-documented environment</li>
<li>Collaborate with broader data organization as steward to the definition and process management of core Gusto metrics</li>
</ul>
<p>Here’s what we&#39;re looking for:</p>
<ul>
<li>Education or work experience in Engineering or Computer Science, or a related technical field</li>
<li>7+ years of experience in an analytics engineering, business intelligence, or technical data analytics role</li>
<li>Experience with SQL and ETL optimization techniques, especially within cloud-based data warehouses like Redshift, Snowflake, etc.</li>
<li>Command-line experience and familiarity with version control collaboration tools (git)</li>
<li>Experience with data pipeline management technologies with dependency checking, such as Airflow, as well as schema design and data modeling tools (dbt)</li>
<li>Experience with data visualization technologies, e.g., Tableau, Looker, Sigma, Mode, Hex</li>
<li>Experience with Python and data ingestion tools</li>
<li>Ability to solve open-ended problems and to project-manage dependencies and timelines to optimal outcomes</li>
<li>Demonstrated ability with tools, business intuition, and attention to detail for data validation and QA</li>
</ul>
<p>Our cash compensation amount for this role is targeted at $138,000-156,000 in Denver &amp; most remote locations, and $168,000-189,000 in New York &amp; San Francisco Bay Area. Stock equity is additional. Final offer amounts are determined by multiple factors including candidate experience and expertise and may vary from the amounts listed above.</p>
<p>Gusto has physical office spaces in Denver, San Francisco, and New York City. Employees who are based in those locations will be expected to work from the office on designated days approximately 2-3 days per week (or more depending on role). The same office expectations apply to all Symmetry roles, Gusto&#39;s subsidiary, whose physical office is in Scottsdale. Note: The San Francisco office expectations encompass both the San Francisco and San Jose metro areas.</p>
<p>When approved to work from a location other than a Gusto office, a secure, reliable, and consistent internet connection is required. This includes non-office days for hybrid employees.</p>
<p>Our customers come from all walks of life and so do we. We hire great people from a wide variety of backgrounds, not just because it&#39;s the right thing to do, but because it makes our company stronger. If you share our values and our enthusiasm for small businesses, you will find a home at Gusto.</p>
<p>Gusto is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic.</p>
<p>Gusto considers qualified applicants with criminal histories, consistent with applicable federal, state and local law. Gusto is also committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures.</p>
<p>We want to see our candidates perform to the best of their ability. If you require a medical or religious accommodation at any time throughout your candidate journey, please fill out this form and a member of our team will get in touch with you.</p>
<p>Gusto takes security and protection of your personal information very seriously. Please review our Fraudulent Activity Disclaimer. Personal information collected and processed as part of your Gusto application will be subject to Gusto&#39;s Applicant Privacy Notice.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Targeted at $138,000-156,000 in Denver &amp; most remote locations, and $168,000-189,000 in New York &amp; San Francisco Bay Area</Salaryrange>
      <Skills>SQL, ETL optimization techniques, cloud-based data warehouses, Redshift, Snowflake, command-line experience, version control collaboration tools, git, data pipeline management technologies, dependency checking, Airflow, schema design and data modeling tools, dbt, data visualization technologies, Tableau, Looker, Sigma, Mode, Hex, Python, data ingestion tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gusto</Employername>
      <Employerlogo>https://logos.yubhub.co/gusto.com.png</Employerlogo>
      <Employerdescription>Gusto handles payroll, health insurance, 401(k)s, and HR for small businesses nationwide, supporting over 500,000 companies.</Employerdescription>
      <Employerwebsite>https://www.gusto.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gusto/jobs/7557049?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Denver, CO;San Francisco, CA;New York, NY;Las Vegas, NV;Atlanta, GA;Phoenix, AZ</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>2738c24d-82c</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences. Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better.</p>
<p>The volume and velocity of data at Intercom are exploding, fueled by our growth and our drive to integrate more sophisticated, AI-assisted data solutions. The Data Engineering team is the critical engine powering Intercom&#39;s future. We are responsible for building and maintaining the distributed foundations that transform raw information into actionable intelligence, empowering all Intercom teams, from Product to Research.</p>
<p>We are looking for a seasoned Senior Data Engineering Manager to take ownership of key data initiatives and drive these efforts forward. This role is about impact and ownership. You will lead a team at the forefront of designing and evolving the core infrastructure that powers our entire data ecosystem.</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams and lead the architectural design and implementation of the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build the self-service tooling and infrastructure they need to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment (where dozens of changes can ship daily).</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>To put things in context, here are examples of the strategic, company-shaping initiatives you will be expected to own and drive as a Senior Data Engineering Manager:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction, making data truly democratized and accessible: users should be able to query information, getting both raw numbers and contextual narratives instantly, without needing a data science degree or waiting on analysts.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale for future growth and advanced use cases.</li>
</ul>
<p>Recent wins you&#39;ll build upon:</p>
<ul>
<li>AI-assisted Local Analytics Development Environment for Airflow and DBT</li>
<li>Data-rich AI apps containerized on Snowflake SPCS</li>
<li>A new, modern data catalog solution</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale</li>
</ul>
<p>You are a leader, a builder, and a problem-solver who thrives on solving real-world business problems.</p>
<p>The Essentials:</p>
<ul>
<li>7+ Years Experience: You have a proven, full-time career history in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder Focus: You can communicate complex technical solutions to a business-focused audience and vice versa. You are comfortable interacting with stakeholders across the entire breadth of the business.</li>
<li>Technical Depth: Your team will be responsible for the majority of execution, but you&#39;re not afraid to get your hands dirty and write code when it&#39;s needed. You lead from the front.</li>
<li>A Leader &amp; Mentor: You naturally recognize opportunities to step back and mentor others, understanding when your guidance will multiply the team&#39;s output.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at Scale: Extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>
<li>Modern Data Stack Fluency: Familiarity with tools like Snowflake and DBT.</li>
<li>Future-Focused: You keep a keen eye on industry trends and emerging technologies, always thinking about what&#39;s next.</li>
</ul>
<p>Next Steps</p>
<p>If you are passionate about designing resilient analytics infrastructure that scales with a high-growth, global product, we encourage you to apply!</p>
<p>Benefits</p>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Policies</p>
<p>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home. We expect employees to be in the office at least three days per week.</p>
<p>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone. As an organization, our policy is to not advocate on behalf of the company or our employees on any social or political topics in our internal or external communications. We respect personal opinion and expression on these topics on personal social platforms on personal time, and do not challenge or confront anyone for their views on non-work-related topics. Our goal is to focus on doing incredible work to achieve our goals and unite the company through our core values.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Snowflake, DBT, Apache Airflow, Data Engineering, Data Science, Data Analysis, Data Visualization, SQL, Python, Cloud Computing, Big Data, Machine Learning, Data Mining, Data Warehousing, ETL, Data Governance, Data Security, Data Architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>57d2ec6f-6de</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>Job Title: Senior Analytics Engineer</p>
<p>Location: London, England</p>
<p>Department: AI Group</p>
<p>Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences. Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better.</p>
<p>Fin can also be combined with our Helpdesk to become a complete solution called the Intercom Customer Service Suite, which provides AI enhanced support for the more complex or high touch queries that require a human agent.</p>
<p>Founded in 2011 and trusted by nearly 30,000 global businesses, Intercom is setting the new standard for customer service.</p>
<p>Driven by our core values, we push boundaries, build with speed and intensity, and consistently deliver incredible value to our customers.</p>
<p><strong>Senior Analytics Engineer</strong></p>
<p>Communication has changed for people. It’s changed for businesses, too.</p>
<p><strong>What is the opportunity?</strong></p>
<p>Intercom’s AI Group is responsible for defining new ML features, researching appropriate algorithms and technologies, and rapidly getting first prototypes in our customers’ hands.</p>
<p>We are extremely product-focussed. Our team of 50+ ML scientists, ML engineers, designers and researchers works in partnership with other teams across the whole company.</p>
<p>We move to production fast, often shipping to beta within weeks of a successful offline test.</p>
<p>We constantly run experiments and measure the success of our AI features.</p>
<p>We use frequentist and Bayesian approaches.</p>
<p>We create dashboards to track results.</p>
<p>We dive deep into exactly how users are being successful, and have to tease out all sorts of complex user interactions.</p>
<p>We have to deal with inherently stochastic AI products, and complex effects.</p>
<p>We are looking for a dedicated Analytics Engineer to join the AI Group to help us with that.</p>
<p><strong>What will I be doing?</strong></p>
<ul>
<li>Data Platform Development: Design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform.</li>
<li>Cross-functional Collaboration: Partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable &amp; ergonomic data solutions.</li>
<li>Data Strategy &amp; Governance: Lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows.</li>
<li>Advanced Analytics &amp; Insights: Conduct data analysis and build custom models to support strategic business decisions and performance measurement.</li>
<li>Automation &amp; Optimization: Streamline data collection and reporting processes to reduce manual effort and improve efficiency.</li>
<li>Innovation in Data Infrastructure: Create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs.</li>
<li>Strategic Influence: Work with partner teams to align data collection with long-term analytics and feature development goals.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>You write advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code.</li>
<li>You’re familiar with the modern data stack; dbt and Snowflake experience are a big plus.</li>
<li>You have a growth mindset and an eagerness to learn.</li>
<li>You exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs.</li>
<li>You practice excellent communication skills, and you tailor explanations of technical concepts to a variety of audiences.</li>
</ul>
<p><strong>Nice to haves</strong></p>
<ul>
<li>Exposure to Apache Airflow or other DAG frameworks; we use Airflow to orchestrate and schedule all of our data workflows and transformations.</li>
<li>Experience working in Tableau, Looker, or a similar visualization/business intelligence platform.</li>
<li>Experience with operational tools and business systems such as Google Analytics, Marketo, Salesforce, Segment, or Stripe.</li>
<li>Familiarity with Python.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Policies</p>
<p>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home.</p>
<p>We expect employees to be in the office at least three days per week.</p>
<p>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</p>
<p>As an organisation, our policy is to not advocate on behalf of the company or our employees on any social or political topics in our internal or external communications.</p>
<p>We respect personal opinion and expression on these topics on personal social platforms on personal time, and do not challenge or confront anyone for their views on non-work-related topics.</p>
<p>Our goal is to focus on doing incredible work to achieve our goals and unite the company through our core values.</p>
<p>Intercom values diversity and is committed to a policy of Equal Employment Opportunity.</p>
<p>Intercom will not discriminate against an applicant or employee on the basis of race, colour, religion, creed, national origin, ancestry, sex, gender, age, physical or mental disability, veteran or military status, genetic information, sexual orientation, gender identity, gender expression, marital status, or any other legally recognised protected basis under federal, state, or local law.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, dbt, Snowflake, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7808050?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>f152af80-ba0</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Job Title: Senior Data Engineering Manager</p>
<p>Location: London, England</p>
<p>Department: R&amp;D</p>
<p>Job Description:</p>
<p>Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences.</p>
<p>Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better. Fin can also be combined with our Helpdesk to become a complete solution called the Intercom Customer Service Suite, which provides AI enhanced support for the more complex or high touch queries that require a human agent.</p>
<p>Founded in 2011 and trusted by nearly 30,000 global businesses, Intercom is setting the new standard for customer service. Driven by our core values, we push boundaries, build with speed and intensity, and consistently deliver incredible value to our customers.</p>
<p><strong>What&#39;s the opportunity?</strong></p>
<p>The volume and velocity of data at Intercom are exploding, fueled by our growth and our drive to integrate more sophisticated, AI-assisted data solutions.</p>
<p>The Data Engineering team is the critical engine powering Intercom&#39;s future. We are responsible for building and maintaining the distributed foundations that transform raw information into actionable intelligence, empowering all Intercom teams, from Product to Research. We are looking for a seasoned Senior Data Engineering Manager to take ownership of key data initiatives and drive these efforts forward.</p>
<p><strong>What will I be doing?</strong></p>
<p>This role is about impact and ownership. You will lead a team at the forefront of designing and evolving the core infrastructure that powers our entire data ecosystem.</p>
<p><strong>Next-Gen Platform Evolution:</strong></p>
<p>Partner with product and business teams and lead the architectural design and implementation of the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</p>
<p><strong>Enablement Through Tooling:</strong></p>
<p>Partner closely with Analytics Engineers, Analysts, and Data Scientists to build the self-service tooling and infrastructure they need to move fast and deploy safely.</p>
<p><strong>Data Quality Guardianship:</strong></p>
<p>Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment (where dozens of changes can ship daily).</p>
<p><strong>Driving Automation:</strong></p>
<p>Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</p>
<p>The Strategic Impact You&#39;ll Drive</p>
<p>To put things in context, here are examples of the strategic, company-shaping initiatives you will be expected to own and drive as a Senior Data Engineering Manager:</p>
<p><strong>GTM Data Platform Strategy:</strong></p>
<p>Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</p>
<p><strong>Conversational BI Strategy:</strong></p>
<p>Lead the charge to shift away from complex, technical reporting toward natural language interaction, making data truly democratized and accessible: users should be able to query information, getting both raw numbers and contextual narratives instantly, without needing a data science degree or waiting on analysts.</p>
<p><strong>Platform &amp; Warehousing Strategy:</strong></p>
<p>Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale for future growth and advanced use cases.</p>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>AI-assisted Local Analytics Development Environment for Airflow and DBT</li>
<li>Data-rich AI apps containerized on Snowflake SPCS</li>
<li>A new, modern data catalog solution</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale</li>
</ul>
<p><strong>Who you are:</strong></p>
<p>You are a leader, a builder, and a problem-solver who thrives on solving real-world business problems.</p>
<p>The Essentials:</p>
<p><strong>7+ Years Experience:</strong></p>
<p>You have a proven, full-time career history in the data space, leading teams of 6+ Engineers.</p>
<p><strong>Stakeholder Focus:</strong></p>
<p>You can communicate complex technical solutions to a business-focused audience and vice versa. You are comfortable interacting with stakeholders across the entire breadth of the business.</p>
<p><strong>Technical Depth:</strong></p>
<p>Your team will be responsible for the majority of execution, but you’re not afraid to get your hands dirty and write code when it’s needed. You lead from the front.</p>
<p><strong>A Leader &amp; Mentor:</strong></p>
<p>You naturally recognize opportunities to step back and mentor others, understanding when your guidance will multiply the team&#39;s output.</p>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<p><strong>Airflow at Scale:</strong></p>
<p>Extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</p>
<p><strong>Modern Data Stack Fluency:</strong></p>
<p>Familiarity with tools like Snowflake and DBT.</p>
<p><strong>Future-Focused:</strong></p>
<p>You keep a keen eye on industry trends and emerging technologies, always thinking about what&#39;s next.</p>
<p>Next Steps</p>
<p>If you are passionate about designing resilient analytics infrastructure that scales with a high-growth, global product, we encourage you to apply!</p>
<p><strong>Benefits</strong></p>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Policies</p>
<p>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home. We expect employees to be in the office at least three days per week.</p>
<p>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone. As an organization, our policy is to not advocate on behalf of the company or our employees on any social or political topics in our internal or external communications. We respect personal opinion and expression on these topics on personal social platforms on personal time, and do not challenge or confront anyone for their views on non-work-related topics. Our goal is to focus on doing incredible work to achieve our goals and unite the company through our core values.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Quality, Automation, Cloud Computing, Big Data, Machine Learning, Data Visualization, Data Analysis, Data Mining, Data Warehousing, Data Governance, Data Security, Data Compliance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company founded in 2011, trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574783?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>678fcec8-2b0</externalid>
      <Title>Senior Media Data Analyst (Knowledge in MMM)</Title>
      <Description><![CDATA[<p>This role will focus heavily on developing the reporting and measurement functions for our Media stakeholders. The ideal candidate will be responsible for transforming data in Snowflake using dbt, building reporting in PowerBI, and analyzing outputs using tools like R or Python. They will be comfortable interpreting and presenting the results of analyses to stakeholders across a wide range of seniority, from entry level to the C-suite, and will be able to clearly communicate actionable insights that improve marketing efficiency.</p>
<p>The Media Measurement &amp; Analytics team supports the Farmers media teams in executing and optimizing media budgets by providing analytical rigor to investment decisions. The team executes incrementality tests, builds measurement models, and reports on performance.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing reporting and measurement functions for Media stakeholders</li>
<li>Transforming data in Snowflake using dbt</li>
<li>Building reporting in PowerBI</li>
<li>Analyzing outputs using tools like R or Python</li>
<li>Interpreting and presenting results of analyses to stakeholders</li>
<li>Communicating actionable insights to improve marketing efficiency</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Fluency in English</li>
<li>Advanced knowledge of Media Measurement (MMM, causal inference, attribution)</li>
<li>Advanced knowledge of Analytics</li>
<li>Advanced knowledge of R/Python</li>
<li>Intermediate knowledge of SQL</li>
<li>Intermediate knowledge of PowerBI</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health and Dental Insurance</li>
<li>Pension Plan</li>
<li>Meal tickets</li>
<li>Life Insurance</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Media Measurement, Analytics, R, Python, SQL, PowerBI, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global consulting and technology services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/i78hWatn3n7QurcP1zeuF2/hybrid-fbs---senior-media-data-analyst-(knowledge-in-mmm)-in-s%C3%A3o-paulo-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>São Paulo</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>5cfbb361-cd7</externalid>
      <Title>Principal Analytics Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Principal Analytics Engineer to lead the design and build of our AI-powered intelligence system. As a Principal Analytics Engineer, you will synthesize complex data streams into a unified, high-fidelity system that serves as the &#39;source of truth&#39; for the entire customer journey. You will engineer a structured knowledge layer that enables us to scale Go-To-Market (GTM) efforts in a world where data must be optimized for human reporting, predictive science, and conversational AI alike.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Architecting the foundation of our marketing intelligence system using BigQuery and dbt infrastructure</li>
<li>Enabling AI &amp; Agents by developing the semantic layer and structured knowledge base</li>
<li>Mapping the customer journey by integrating disparate signals across digital, product, and sales</li>
<li>Scaling through partnerships by partnering with Enterprise, Product, Sales, and Finance teams</li>
</ul>
<p>You will bring:</p>
<ul>
<li>Deep experience with BigQuery, dbt, and semantic layers</li>
<li>Proven ability to apply automation or LLM-assisted workflows to the data modeling lifecycle</li>
<li>Ability to build complex, interconnected systems by starting with the desired outcome and working backward</li>
<li>Collaborative communication skills</li>
<li>Operational excellence &amp; governance skills</li>
</ul>
<p>Bonus points for:</p>
<ul>
<li>GTM fluency</li>
<li>Marketing science foundations</li>
<li>Identity resolution</li>
<li>AI production scaling</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$159,800-$252,800 USD</Salaryrange>
      <Skills>BigQuery, dbt, semantic layers, data modeling, automation, LLM-assisted workflows, collaborative communication, operational excellence &amp; governance, GTM fluency, marketing science foundations, identity resolution, AI production scaling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a platform for search, security, and observability. Its platform is used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7830298?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>7db53eed-402</externalid>
      <Title>Principal Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Principal Analytics Engineer to lead the design and build of our AI-powered intelligence system. This role involves synthesizing complex data streams into a unified, high-fidelity system that serves as the &#39;source of truth&#39; for the entire customer journey. You will engineer a structured knowledge layer that enables us to scale Go-To-Market (GTM) efforts in a world where data must be optimized for human reporting, predictive science, and conversational AI alike.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting the foundation: designing and building the core BigQuery and dbt infrastructure that powers our marketing intelligence, transforming raw signals into high-fidelity, agent-ready data products.</li>
<li>Enabling AI &amp; Agents: developing the semantic layer and structured knowledge base that allows AI agents to accurately &#39;talk&#39; to our business data and reason through complex performance questions.</li>
<li>Mapping the journey: integrating disparate signals across digital, product, and sales into a unified lifecycle model that tracks the customer&#39;s path from discovery to revenue.</li>
<li>Scaling through partnerships: partnering with Enterprise, Product, Sales, and Finance teams to align on shared metrics while mentoring other engineers to uphold high standards for our data foundation.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Deep experience with BigQuery, dbt, and semantic layers.</li>
<li>Ability to build complex, interconnected systems by starting with the desired outcome and working backward.</li>
<li>Systems &amp; Design Thinking: the ability to look at a complex web of data and see the underlying architecture required to make it simple and extensible.</li>
<li>Collaborative Communication: a track record of &#39;translating&#39; technical debt into business value and coaching peers through complex architectural hurdles.</li>
<li>Operational Excellence &amp; Governance: treating data as infrastructure and having deep experience implementing data contracts, automated quality monitoring (DQM), and governance frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,000-$243,600 CAD</Salaryrange>
      <Skills>BigQuery, dbt, semantic layers, data modeling, data governance, data architecture, GTM fluency, marketing science foundations, identity resolution, AI production scaling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a platform for search, security, and observability. Its products are used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7851240?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>e84def6f-b65</externalid>
      <Title>Enterprise Sales Director (PNW)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>
<p>We’re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter. At our core, we believe in empowering data practitioners:</p>
<ul>
<li>Reliable, high-quality data is the fuel that propels AI-powered data engineering.</li>
<li>AI is changing data work, fast. dbt’s data control plane keeps data engineers ahead of that curve.</li>
<li>We empower engineers to deliver reliable, governed data faster, cheaper, and at scale.</li>
</ul>
<p>dbt Labs is now synonymous with analytics engineering, defining the modern data stack and serving as the data control plane for enterprise teams around the world. And we’re just getting started.</p>
<p>We’re growing fast and building a team of passionate, curious people across the globe. Learn more about what makes us special by checking out our values.</p>
<p>Location: US Remote (PNW)</p>
<p><strong>About the Role</strong></p>
<p>We’re looking to hire an Enterprise Sales Director to join the Revenue Team. This person will be responsible for building out our enterprise customer base throughout the Pacific Northwest. The ideal person will be a proactive and curious member of our growing Sales team, identifying new business with prospects and growth opportunities for clients. Foresight and experience working with intricate sales cycles will take this individual confidently into the future of dbt Labs.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Own the full sales cycle from lead to ongoing utilization for commercial prospects</li>
<li>Organize POC implementations of dbt Cloud Enterprise</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud commercial roadmap</li>
<li>Become an expert in SQL, dbt, and commercial data operations</li>
<li>Be an active member of the dbt open source community</li>
</ul>
<p><strong>What You’ll Need</strong></p>
<ul>
<li>4+ years of closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p><strong>What Will Make You Stand Out</strong></p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-source software</li>
<li>Knowledge of or prior experience with dbt</li>
</ul>
<p><strong>Remote Hiring Process</strong></p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with the Hiring Manager</li>
<li>Team Interviews: Sales &amp; Solutions Architects</li>
<li>Pipeline Generation Activity with Sales Leadership</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p><strong>Compensation</strong></p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs’ total rewards during your interview process. In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<p>Sales Director OTE Range</p>
<p>$260,000-$330,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$260,000-$330,000 USD</Salaryrange>
      <Skills>SQL, dbt, commercial data operations, technology sales, complex technical concepts, consultative sale, verbal communication, written communication, in-person communication</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they&apos;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4687984005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>46f577e7-522</externalid>
      <Title>Staff Data Analyst</Title>
      <Description><![CDATA[<p>Honor Technology&#39;s mission is to change the way society cares for older adults. As a leader in aging care innovation, Honor provides the technology, tools, and services that empower older adults to live life on their own terms.</p>
<p>We&#39;re looking for a Staff Data Analyst to join our team. This role reports to the VP of Data and joins a team of five other analysts collaborating closely with stakeholders across the entire organization. We&#39;re looking for someone who is excited to jump into new problems and make an impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Solve problems that have real-world impact</li>
<li>Thrive in diverse, cross-functional environments, collaborating with partners across design, product, engineering, and operations</li>
<li>Live at the intersection of software and the real world, whether that&#39;s optimizing complex operational problems or tracing the lineage of a key metric through a dozen transformations</li>
<li>Share knowledge, mentor others, and contribute to a healthy, inclusive team culture</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of professional analytics experience with a track record of owning analytics systems and solving applied business problems</li>
<li>Strong stakeholder management: you are comfortable translating business needs into concrete requirements and communicating tradeoffs clearly</li>
<li>Excellent written and verbal communication skills</li>
<li>Deeply experienced with our analytics stack (Git, Fivetran, Redshift, DBT, Looker) or equivalent tools (and a desire to learn new ones!)</li>
<li>A passion for using your data intuition to navigate a sea of messy data, generate hypotheses, and implement solutions that directly impact the business</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Base pay is just a part of our total rewards program</li>
<li>Honor offers generous equity packages that increase with position level and responsibilities</li>
<li>A 401K with up to a 4% employer match</li>
<li>Medical, dental, and vision coverage including zero-cost plans for employees</li>
<li>Short-term disability, long-term disability, and life insurance are fully employer-paid with a voluntary additional life insurance option</li>
<li>A generous time-off program, mental health benefits, wellness program, and discount program</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$148,500-$165,000 USD</Salaryrange>
      <Skills>Git, Fivetran, Redshift, DBT, Looker, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults to live life on their own terms. It has a global franchise network and over 100,000 Care Pros.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8451598002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b699c631-37e</externalid>
      <Title>FBS Data Engineer-ETL (Informatica)</Title>
      <Description><![CDATA[<p>We are looking for a skilled Data Engineer to design, build, and maintain data pipelines that support analytics and business intelligence initiatives. This role involves both enhancing existing pipelines and developing new ones to integrate data from diverse internal and external sources.</p>
<p>The ideal candidate will have advanced SQL and Informatica skills, experience in ETL development, and a foundational understanding of dimensional data modeling. Experience with DBT is a plus.</p>
<p>Key responsibilities include designing, developing, and maintaining data pipelines and ETL workflows, enhancing and optimising existing data pipelines, building new data ingestion pipelines, and using Informatica to develop and manage ETL processes.</p>
<p>The successful candidate will have a bachelor&#39;s degree in Computer Science, Information Systems, or a related field, and 2-4 years of hands-on experience in data engineering or ETL development using Informatica.</p>
<p>They will also have advanced-level proficiency in writing, optimising, and troubleshooting SQL queries, intermediate experience building and managing pipelines using ETL platforms, and at least 3 years using Informatica for data integration tasks.</p>
<p>Excellent problem-solving and communication skills, with the ability to collaborate across teams, are essential for this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Informatica, ETL development, dimensional data modeling, DBT, cloud data platforms, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global consulting and technology services company with nearly 350,000 employees across over 50 countries, serving one of the United States&apos; largest insurers on this engagement.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/pD4BtxSbTed3C7zp5tL7cF/remote-fbs-data-engineer-etl-(informatica)-in-brazil-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>074e149b-249</externalid>
      <Title>Senior Data Analyst (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>At Holidu, data isn&#39;t just a support function; it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>
<p>This role is based in Munich, with two to three office days per week.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3)</li>
<li>Data Pipelines: Airflow, dbt</li>
<li>Data Visualisation: Looker</li>
<li>Data Analytics: SQL, Python</li>
<li>Collaboration: Git, Jira, Confluence, Slack</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>You&#39;ll be part of Holidu&#39;s Business Intelligence department, sitting within one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. You&#39;ll collaborate cross-functionally with data engineers, data scientists, and a broader analytics team.</li>
<li>Engage with stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management), providing insights that influence product strategy, internal operations, and revenue growth.</li>
<li>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</li>
<li>Do real, high-quality analytical work, diving deep into the data, building solutions, and raising the bar for what good analysis looks like at Holidu.</li>
<li>Contribute to shaping the future of analytics at Holidu by sharing knowledge, supporting colleagues, and bringing a continuous improvement mindset to how the team works.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>4+ years of data analytics experience.</li>
<li>A collaborative mindset with clear experience communicating findings to senior stakeholders and decision makers.</li>
<li>A mission-driven, working-backwards mentality, starting from customer and business needs, navigating multi-stakeholder contexts, and translating goals into analytical solutions and actionable insights.</li>
<li>Excellent analytical and technical skills: strong SQL, Python (or similar), data visualisation, and the ability to develop technical frameworks that serve a clear business need.</li>
<li>A genuine commitment to AI enablement: you actively use tools like Claude Code or similar to enhance your own coding, planning, and workflows, and you&#39;re excited to bring others along. AI fluency is not a nice-to-have here; it&#39;s part of how we work and how we evaluate impact.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p>Need a sneak peek? Check out the adventure that awaits you on Instagram @lifeatholidu and dive straight into the world of Tech at Holidu for more insights!</p>
<p><strong>Want to travel with us?</strong></p>
<p>Apply online on our careers page! Your first travel contact will be Katharina from HR.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS Stack, Airflow, dbt, Looker, SQL, Python, Git, Jira, Confluence, Slack</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2611548?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b0e8bc32-5f8</externalid>
      <Title>Data Engineer, Associate</Title>
      <Description><![CDATA[<p>The Analytics and Automation team within the EMEA Core COO organisation leverages technology, data, and AI to deliver management information and analytics that drive actionable insights into sales performance and client engagement across the EMEA client businesses. The team plays a critical role in shaping how BlackRock sells to and services its clients, enabling better decision-making through the effective use of data.</p>
<p>The team partners closely with Technology and Engineering teams to design and deliver high-impact data and visualisation tools for COO and Distribution stakeholders. You will also collaborate with internal technology teams on infrastructure, tools, processes, standards, and development practices, as well as work alongside data science and analytics teams across the firm.</p>
<p>The successful candidate will bring a strong passion for technology, data, and client outcomes, with comfort working across a broad range of technical capabilities, including databases, software development, and cloud infrastructure. This role suits someone who enjoys solving complex problems and building scalable, high-impact data products.</p>
<p>At BlackRock, we value curiosity, continuous learning, and professional growth. With over $14 trillion in assets under management, we have a unique responsibility: our products and technology empower millions of investors to save for retirement, pay for education, purchase homes, and improve their long-term financial wellbeing.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Explore, profile, cleanse, and preprocess data to ensure high-quality datasets for analytics, reporting, and downstream consumption.</li>
<li>Design and manage workflows for storing and retrieving vectorised documents to support AI-enabled use cases.</li>
<li>Apply embedding models to build AI-driven solutions.</li>
<li>Leverage modern AI and machine-learning techniques, including large language models (LLMs) and agent-based systems, to enhance data workflows and automation.</li>
<li>Design, build, and maintain scalable ELT pipelines in Snowflake, covering data ingestion, transformation, and publication layers for enterprise use.</li>
<li>Develop and optimise Snowflake data models (schemas, views, and curated datasets) to enable consistent, performant, and well-governed access.</li>
<li>Implement robust data quality controls, including validation, reconciliation, monitoring, and alerting, to ensure the accuracy and reliability of critical datasets.</li>
<li>Partner with central platform and data engineering teams to support Snowflake architecture, including performance tuning, warehouse optimisation, security patterns, and cost-effective usage.</li>
<li>Write high-quality, maintainable code that is well-tested, documented, and aligned with engineering best practices, including version control and peer review.</li>
<li>Build and maintain Streamlit applications to enable self-service data exploration, operational tooling, and lightweight analytics for business users, including applications that interact directly with Snowflake datasets and stored procedures.</li>
<li>Translate business questions into technical solutions, delivering clear outputs and actionable insights for both technical and non-technical stakeholders.</li>
</ul>
<p>Skills and Competencies:</p>
<ul>
<li>Strong experience with Snowflake and advanced SQL, including query optimisation and best-practice analytical data modelling.</li>
<li>Knowledge of modern AI and machine-learning techniques, including large language models (LLMs), agent-based systems, embedding models, and document vectorization.</li>
<li>Experience developing and maintaining data transformation workflows using dbt within Snowflake, including modular modelling, testing, and documentation.</li>
<li>Proficiency in Python for data engineering and application development, including data processing, orchestration patterns, and reusable components.</li>
<li>Experience building Streamlit applications, ideally in an enterprise environment, with a focus on usability and integration with Snowflake-backed data products.</li>
<li>Familiarity with modern data engineering practices, including ELT/ETL patterns, incremental processing, scheduling, observability, and automated testing.</li>
<li>Strong problem-solving mindset, with the ability to work independently, manage ambiguity, and drive continuous improvement.</li>
<li>Strong communication skills, with the ability to articulate technical concepts and insights to non-technical stakeholders.</li>
<li>Fluency in English, both written and spoken.</li>
</ul>
<p>Experience and Qualifications:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Engineering, Statistics, or a related quantitative discipline.</li>
<li>Proven experience in data engineering, analytics engineering, or a closely related technical role, ideally within a cloud-based data platform environment.</li>
<li>Experience working with commercial, sales, or distribution datasets is an advantage.</li>
<li>3–5 years of relevant experience in data engineering, or a related field within a multinational or complex organisational environment.</li>
</ul>
<p>Our benefits:</p>
<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>
<p>Our hybrid work model:</p>
<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>About BlackRock:</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children&#39;s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
<p>This mission would not be possible without our smartest investment – the one we make in our employees. It&#39;s why we&#39;re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Snowflake, advanced SQL, query optimisation, best-practice analytical data modelling, modern AI and machine-learning techniques, large language models (LLMs), agent-based systems, embedding models, document vectorization, dbt, Python, data engineering, application development, data processing, orchestration patterns, reusable components, Streamlit, usability, integration, ELT/ETL patterns, incremental processing, scheduling, observability, automated testing</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a multinational investment management corporation with over $14 trillion in assets under management.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/7qBV8qezqAyWXYSoCvFizs/data-engineer%2C-associate-in-budapest-at-blackrock?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4c1ec49b-16d</externalid>
      <Title>Systems Analyst</Title>
      <Description><![CDATA[<p>We&#39;re looking for a RevOps Systems Analyst to help build a modern, AI-powered revenue operations stack. This isn&#39;t a traditional Salesforce admin role. We&#39;re building a GTM tech stack where AI is a first-class citizen, automations are genuinely intelligent, and our CRM is a living system that reflects how our business actually works.</p>
<p>You&#39;ll function as a Salesforce admin and backend developer in one. You&#39;ll own platform administration, build complex automations via Flows and Apex, and develop and maintain custom services and integrations, including Salesforce MCP servers and AI tooling that connect our CRM to the broader systems our team runs on. For larger engineering efforts, you&#39;ll work shoulder-to-shoulder with our engineering team, scoping and shipping together.</p>
<p>In this role, you will:</p>
<ul>
<li>Administer our Salesforce org: object model, data architecture, user configuration, and platform health.</li>
<li>Build and maintain automations via Salesforce Flows, Apex, and custom services that power our GTM processes end to end.</li>
<li>Develop and maintain Salesforce MCP servers and AI tooling that extend CRM functionality into AI-powered workflows.</li>
<li>Design and maintain SQL-based data pipelines that feed dashboards and revenue reporting across the org.</li>
<li>Translate ambiguous business problems into clean technical scope, from stakeholder conversation to implemented solution.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Salesforce administration experience with deep familiarity across the object model, Flows, data loading tools, and platform configuration.</li>
<li>Strong Apex development skills: you can build, debug, and maintain custom logic, triggers, and integrations without leaning on a separate engineering team.</li>
<li>SQL proficiency: you can decompose ambiguous questions into answerable queries and are comfortable with complex joins, aggregations, and pipeline logic.</li>
<li>Systems mindset: you can hold an end-to-end process in your head, trace a field from its origin through every downstream dependency, and anticipate what breaks when something changes.</li>
<li>Excited by modern tooling: you&#39;re drawn to building AI-native systems and are curious about where Salesforce, MCP, and AI agents intersect.</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Familiarity with dbt or modern data transformation tooling.</li>
<li>Experience with BI tools such as Sigma or Looker.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce administration, Apex development, SQL, Salesforce Flows, Custom services, Salesforce MCP servers, AI tooling, dbt, modern data transformation tooling, BI tools, Sigma, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ElevenLabs</Employername>
      <Employerlogo>https://logos.yubhub.co/elevenlabs.io.png</Employerlogo>
      <Employerdescription>ElevenLabs is an AI research and product company transforming how we interact with technology. It has raised $781M in funding and its last valuation was $11B.</Employerdescription>
      <Employerwebsite>https://elevenlabs.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://elevenlabs.io/careers/b92059ea-d71f-4eef-a184-8510265752fb/systems-analyst?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9faf3487-9d2</externalid>
      <Title>Data Analytics/Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p>Role Summary</p>
<p>We are seeking passionate and talented Data/Analytics Engineers to join our team. In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>
<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions. Work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>
<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>
<li>Define and enforce data governance, metadata management, and data lineage standards.</li>
<li>Ensure data integrity, security, and compliance with industry standards.</li>
</ul>
<p>About You</p>
<ul>
<li>Master’s degree in Computer Science, Engineering, Statistics, or a related field.</li>
<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>
<li>Proficiency in Python and SQL.</li>
<li>Experience with dbt.</li>
<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).</li>
<li>Strong analytical and problem-solving skills, with attention to detail.</li>
<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>
<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>
<li>Background in building self-service data platforms for analytics and AI use cases.</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Intro call with Recruiter - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>
<li>Technical interview - System Design - 45 min</li>
<li>Value talk interview - 30 min</li>
<li>References</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and equity package</Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse, Machine learning pipelines, MLOps, Feature engineering, Containerization, Orchestration, DevOps, CI/CD pipelines, Infrastructure-as-code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that designs and develops high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions. The company&apos;s comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>ef31cad0-254</externalid>
      <Title>Infrastructure Engineer (Data &amp; Automations)</Title>
      <Description><![CDATA[<p>We are looking for an Infrastructure Engineer (Data &amp; Automations) to join our Core Platform team. As ElevenLabs scales, the systems and tooling needed to support our teams have grown significantly. As part of this team, you will own the infrastructure that enables every team at ElevenLabs to move fast, safely, and at scale - from the data pipelines that help our internal stakeholders understand what&#39;s happening in production, to the automations and agents that enable our non-engineering teams to scale non-linearly.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning the infrastructure underpinning our Data and Automations teams - setting up internal services, building and maintaining ETLs, and connecting systems with one another.</li>
<li>Taking end-to-end ownership of platform reliability and security, with a particular focus on improving security across our internal systems.</li>
<li>Collaborating closely with the Infrastructure team to bridge platform needs with infra capabilities.</li>
<li>Partnering with Growth, Finance and other internal teams to ensure they have the data and tooling they need.</li>
</ul>
<p>You will be working with a range of technologies, including cloud infrastructure, container orchestration, deployment systems, and security fundamentals. We are looking for someone with a strong background in infrastructure engineering and software engineering fundamentals, along with hands-on experience across these technologies.</p>
<p>In return, you will have the opportunity to work with a talented team of engineers and researchers, and contribute to the development of cutting-edge AI technology. You will also have access to a range of benefits, including a competitive salary, flexible working hours, and opportunities for professional development.</p>
<p>If you are interested in this opportunity, please submit your application, including your resume and a cover letter explaining why you are a good fit for this role. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud infrastructure, container orchestration, deployment systems, security fundamentals, Python, Kubernetes, DBT, CI/CD systems, AI agents, developer experience tooling, basics of how AI models work</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ElevenLabs</Employername>
      <Employerlogo>https://logos.yubhub.co/elevenlabs.io.png</Employerlogo>
      <Employerdescription>ElevenLabs is an AI research and product company transforming how we interact with technology. It has raised $781M in funding and its last valuation was $11B.</Employerdescription>
      <Employerwebsite>https://elevenlabs.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://elevenlabs.io/careers/01d0899b-0e40-4af2-a859-5d21962666b1/infrastructure-engineer-data-automations?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>11ec86c6-270</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>
<p>We are looking for a highly skilled Senior Data Engineer to become part of our core Data &amp; AI Engineering team. In this pivotal role, you will be responsible for designing and expanding enterprise-level data infrastructure that enables ZoomInfo&#39;s internal teams to interact with data comprehensively (extracting, exploring, analyzing, and generating insights) through various platforms using ZI&#39;s internal chat agent.</p>
<p>The ideal candidate has a strong background in big data processing, pipeline orchestration, and data modeling, with a proven track record of delivering scalable and high-quality data solutions in fast-paced, data-centric product environments. Given the dynamic nature of emerging technologies, this role requires an individual who excels at exploration and embraces continuous learning as core responsibilities.</p>
<p>You&#39;ll constantly research and implement innovative solutions while integrating vast, diverse data sources into our AI applications, including our industry-leading LLM-powered systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain high-performance, product-centric data pipelines using Airflow, DBT, and Python.</li>
<li>Architect and optimize the massive-scale data warehouse and lakehouse that serves as our single source of truth for all customer data, primarily using Snowflake.</li>
<li>Lead the integration of diverse structured and unstructured data sources (e.g., web data, third-party APIs) into our data ecosystem, ensuring high-quality and reliable ingestion.</li>
<li>Implement and enforce Model Context Protocol (MCP) or similar architectures to feed accurate and contextual data into our LLM-powered products for applications like Retrieval Augmented Generation (RAG) and advanced search.</li>
<li>Collaborate with ML engineers, data scientists, and product managers to translate business needs into scalable data solutions that directly enhance customer value.</li>
<li>Define, monitor, and enforce data quality SLAs across all pipelines and products, ensuring data accuracy and lineage are a top priority.</li>
<li>Mentor and coach junior engineers, promoting best practices in code quality, data architecture, and operational excellence.</li>
<li>Participate in architectural decisions and long-term strategy planning for our enterprise-wide data infrastructure, with a focus on cost, performance, and reliability.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Expert-level SQL for building performant, scalable queries and transformations on massive datasets.</li>
<li>Strong Python programming skills with a focus on distributed computing, data manipulation, and building robust APIs.</li>
<li>Production-level experience with large-scale batch and streaming data processing.</li>
<li>Hands-on experience with DBT (Data Build Tool) for advanced data modeling and transformations in a modern data stack.</li>
<li>Deep knowledge of Snowflake data warehouse design, optimization, and cost modeling.</li>
<li>Experience implementing Model Context Protocol (MCP) or similar architectures to feed structured and unstructured data into LLM-powered systems.</li>
<li>Strong understanding of data architecture concepts, including data lakes, event-driven architectures (e.g., Kafka), ETL/ELT, and data mesh.</li>
<li>Proficiency with cloud platforms (GCP and/or AWS) and infrastructure as code (e.g., Terraform).</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Familiarity with LLMOps, LangChain, or RAG (Retrieval Augmented Generation) pipelines.</li>
<li>Experience with building embedding models or pipelines for Named Entity Recognition (NER).</li>
<li>Knowledge of data cataloging tools (e.g., OpenLineage) and lineage tracking.</li>
<li>Familiarity with other distributed systems and databases (e.g., DynamoDB, Flink).</li>
</ul>
<p>Required Non-Technical Skills:</p>
<ul>
<li>Excellent communication skills – ability to explain complex technical concepts to both engineering teams and non-technical stakeholders.</li>
<li>Strategic &amp; Product-Oriented Thinking – can translate business objectives and customer needs into scalable, high-impact data solutions.</li>
<li>Leadership &amp; Mentorship – experience guiding and uplifting engineering teams to achieve their full potential.</li>
<li>Stakeholder Management – able to collaborate effectively across departments (Product, Engineering, Sales, Compliance).</li>
<li>Agility &amp; Adaptability – thrives in ambiguous, evolving environments and can rapidly prototype and iterate on solutions.</li>
<li>Strong documentation habits and ability to evangelize best practices across the organization.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>8+ years of progressive experience in data engineering, with a track record of leadership and impact.</li>
<li>Demonstrated experience in implementing or scaling data infrastructure for a data-centric product company.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>SQL, Python, Airflow, DBT, Snowflake, Model Context Protocol, LLM-powered systems, data architecture, cloud platforms, infrastructure as code, LLMOps, LangChain, RAG, Named Entity Recognition, data cataloging tools, lineage tracking, distributed systems, databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a NASDAQ-listed company that provides a Go-To-Market Intelligence Platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8509474002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c0c30c21-9ae</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>You&#39;ll own Gamma&#39;s data infrastructure and architecture as we scale to hundreds of millions of users and petabytes of data. This means defining the technical strategy for our end-to-end event pipeline architecture, designing distributed systems that handle massive scale with reliability, and establishing the foundation for how data flows through Gamma.</p>
<p>As a Staff Data Engineer, you&#39;ll balance hands-on engineering with technical leadership. You&#39;ll architect solutions for orders of magnitude growth, mentor engineers across the organization, and drive strategic decisions about our data stack. You&#39;ll work closely with analytics, product, and engineering leadership to enable data-driven decision making at scale while building systems that serve millions of users and inform critical business decisions.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own and evolve our end-to-end event pipeline architecture, from Kafka ingestion through Snowflake analytics, setting technical direction for data infrastructure</li>
<li>Design and architect distributed data systems that scale to orders of magnitude more data volume while maintaining world-class query performance</li>
<li>Lead initiatives to build and optimize CDC (change data capture) pipelines and streaming data transformations at massive scale</li>
<li>Establish best practices for data quality, pipeline reliability, and system observability across the organization</li>
<li>Drive strategic technical decisions about data modeling, infrastructure architecture, and technology choices</li>
<li>Mentor engineers and elevate data engineering practices across analytics, product, and engineering teams</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years as a data or software engineer with deep expertise in distributed systems, data infrastructure, and high-growth SaaS products at massive scale</li>
<li>Expert-level knowledge of Apache Kafka (producers, consumers, Kafka Connect, stream processing) and event streaming platforms</li>
<li>Extensive hands-on experience with Snowflake, including performance optimization, cost management, and data modeling; strong foundation in Postgres, CDC patterns, and replication strategies</li>
<li>Proven track record architecting and leading major data infrastructure initiatives through orders-of-magnitude growth</li>
<li>Experience establishing best practices and driving technical strategy across organizations</li>
<li>Strong communication skills with a history of influencing technical direction across engineering, analytics, and leadership</li>
<li>Proficiency with dbt, Terraform, and working knowledge of data governance, privacy compliance (GDPR, CCPA), and security best practices</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges from $230K to $310K, plus benefits &amp; equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K - $310K</Salaryrange>
      <Skills>Apache Kafka, Snowflake, Postgres, dbt, Terraform, data governance, privacy compliance, security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a technology company that enables users to create content using AI-generated presentations and images.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/4b2c97d1-b12b-46b7-9e24-1fcd248e28a3?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7ec15bc8-951</externalid>
      <Title>GTM Engineer</Title>
      <Description><![CDATA[<p>You&#39;ll build the AI-native GTM systems and data infrastructure that turn product usage signals into enterprise sales opportunities. Gamma&#39;s PLG flywheel generates enormous engagement data across millions of users. Your job is to create the systems that identify which accounts should talk to sales, when they&#39;re ready, and why.</p>
<p>This is a 0-to-1 role at the intersection of data, product, and revenue. You&#39;ll build Product Qualified Lead identification systems, design AI-powered lead scoring models, and implement data pipelines that give sales and customer success real-time visibility into engagement and expansion signals. You&#39;ll partner with Product and Data teams to instrument tracking, ensure data quality, and continuously improve how we identify and convert high-intent accounts.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build Product Qualified Lead (PQL) identification systems that surface enterprise buying signals based on team expansion, engagement, feature adoption, and company attributes</li>
<li>Build AI agents for automated account research using LLM APIs to analyze company websites, news, funding events, and tech stacks, generating personalized talking points for sales</li>
<li>Design and implement data pipelines from product usage data to HubSpot, enabling sales and CS teams to see real-time engagement, usage trends, and expansion signals</li>
<li>Create AI-powered lead scoring models combining product behavior, firmographics, and engagement patterns to predict conversion likelihood</li>
<li>Build dashboards and reporting that give sales, CS, and leadership visibility into account health, product adoption, expansion opportunities, and churn risk</li>
<li>Implement reverse ETL infrastructure using tools like Census, Hightouch, or custom solutions to ensure product data flows seamlessly into GTM systems</li>
</ul>
<p><strong>What you&#39;ll bring</strong></p>
<ul>
<li>3–5 years of experience in a GTM Engineer, Growth Engineer, Revenue Ops, or Analytics Engineering role at a PLG B2B SaaS company</li>
<li>Strong technical foundation in Python and SQL with experience building data pipelines, ETL/reverse ETL workflows, and integrating product data with GTM systems like HubSpot or Salesforce</li>
<li>API integration expertise with experience building workflows using tools like n8n, Zapier, Make, or Tray.io</li>
<li>Deep understanding of PLG metrics with the ability to operationalize activation, engagement, and expansion signals, and a track record of building systems (PQL models, AI agents, predictive analytics) that drove measurable pipeline or revenue</li>
<li>Scrappy builder mindset with the judgment to balance custom builds versus off-the-shelf tools, ideally with experience helping build early data systems fueling a PLG-to-enterprise transition</li>
<li>Data warehouse experience (Snowflake, BigQuery, Redshift) and familiarity with dbt or similar transformation tools (Nice to have)</li>
<li>Production machine learning experience building, deploying, and monitoring predictive models (Nice to have)</li>
</ul>
<p><strong>Compensation range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges from $170K to $215K, plus benefits &amp; equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$170K - $215K</Salaryrange>
      <Skills>Python, SQL, data pipelines, ETL/reverse ETL workflows, API integration, n8n, Zapier, Make, Tray.io, PLG metrics, predictive analytics, dbt, Snowflake, BigQuery, Redshift, production machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a PLG B2B SaaS company that enables users to create content using AI-generated presentations and images.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/e068135f-9816-4e5d-bd93-6464f314c57a?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>01d26760-398</externalid>
      <Title>Lead Data Scientist</Title>
      <Description><![CDATA[<p>You&#39;ll be Gamma&#39;s first Data Scientist, turning massive volumes of user data and AI outputs into insights that shape how millions of people create content. With over 1 million AI-generated presentations and 5 million AI images created daily, the signal is enormous. Your job is to find the patterns, measure what matters, and help us ship better features faster.</p>
<p>You&#39;ll design A/B tests that measure product impact, build frameworks that reveal how our AI models perform across user segments, and investigate the hard questions: what makes a good AI-generated presentation, and why does a feature land differently for enterprise versus consumer users? You&#39;ll partner closely with product, engineering, and design to define quality metrics, uncover edge cases, and guide decisions with data. This is an IC role for someone eager to be hands-on, but it can grow into a leadership position establishing the Data function at Gamma.</p>
<p>You&#39;ll thrive here if you&#39;re curious, comfortable with ambiguity, and excited to figure out what questions to ask rather than just answering the ones given to you.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build frameworks that help the team understand AI model performance, user behavior, and product health across Gamma&#39;s platform</li>
<li>Dig into AI model outputs across user cohorts to identify quality gaps and create evals and metrics to measure improvement</li>
<li>Partner with engineering and product to define quality metrics for AI-generated content and user satisfaction</li>
<li>Develop statistical models and frameworks that empower product teams to make data-informed decisions independently</li>
<li>Design and analyze large-scale A/B tests and experiments with statistical rigor to measure product impact and guide prioritization</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180K - $310K</Salaryrange>
      <Skills>statistical foundations, A/B testing, experiments, large-scale data, metrics frameworks, dbt, Snowflake, unstructured data, text data, AI/ML products, LLMs, generative AI, model performance, production settings</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a technology company that enables users to create content using AI-generated presentations and images.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/a7043f8c-fd04-46fb-a5d9-e4b8845ba10d?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>37545f84-f6c</externalid>
      <Title>AI Analytics Engineer (Business Analytics)</Title>
      <Description><![CDATA[<p>Join Airtable as an Analytics Engineer supporting our Strategic Finance and Accounting teams, where you&#39;ll take an AI-first approach to building the next generation of analytics infrastructure and tooling. This is a unique opportunity to shape how Finance stakeholders access and leverage data, driving real-time, self-serve insights and accelerating the adoption of AI-native analytics across Airtable.</p>
<p>In this role, you&#39;ll own the canonical financial data models and analytics infrastructure that power critical metrics like ACV, ARR, billings, and revenue. You&#39;ll build scalable data pipelines, dashboards, and AI-powered systems that enable Finance stakeholders to explore data independently and make confident decisions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and maintain trusted data models for core financial metrics</li>
<li>Develop and govern dbt data pipelines</li>
<li>Build and optimise dashboards that deliver real-time, self-serve insights</li>
<li>Enable data independence for Finance stakeholders</li>
<li>Collaborate with Finance and data partners to establish the AI Business Context layer</li>
<li>Develop tools that enable natural language access to financial data and AI-assisted reporting</li>
<li>Design and implement AI-powered workflows that automatically surface patterns, anomalies, and meaningful changes in financial data</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Technical curiosity and AI-forward thinking</li>
<li>A builder at heart with a bias towards making things</li>
<li>Analytical grounding with SQL proficiency and experience with modern data tools</li>
<li>Detail-oriented with financial rigor</li>
<li>Clear communication and strong writing skills</li>
<li>Business-minded with a genuine interest in how the business works</li>
<li>Ability to thrive in ambiguity and create clarity and forward momentum</li>
</ul>
<p>Airtable is an equal opportunity employer and welcomes people of different backgrounds, experiences, abilities, and perspectives. Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$157,100-$193,600 USD</Salaryrange>
      <Skills>SQL, dbt, Looker, Omni, Airtable, AI, Machine Learning, Data Engineering, Data Analysis, Python, R, Tableau, Power BI, Data Visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. Over 500,000 organisations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://www.airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8470036002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA; Austin, TX; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b652c548-59f</externalid>
      <Title>AI Analytics Engineer (AI &amp; Analytics Platform)</Title>
      <Description><![CDATA[<p>We&#39;re looking for an AI Analytics Engineer to help define what AI-powered analytics looks like from the ground up. As one of the first hires shaping this discipline, you&#39;ll build the systems that make AI tools accurate, design the workflows that make them trustworthy, and partner across the business to drive adoption.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Build and maintain context infrastructure: Translate institutional business knowledge into structured formats so that AI tools can answer questions accurately.</li>
<li>Design and run evaluation frameworks: Develop predefined test cases, accuracy benchmarks, and validation workflows that measure whether AI-generated insights are trustworthy.</li>
<li>Build and orchestrate AI agent systems: Help design, build, and iterate on the agent architectures that power our analytics tools.</li>
<li>Experiment and evaluate: Test prompt configurations, agent behaviours, and model outputs across different use cases.</li>
<li>Develop internal AI tooling and workflows: Build tools and automations that improve DS&amp;A&#39;s own efficiency.</li>
<li>Build automated insight generation systems: Design and develop AI-powered systems that proactively surface patterns, anomalies, and meaningful changes in business data.</li>
<li>Drive cross-functional adoption: Partner with GTM, Product, Finance, and other teams to onboard users, field questions, triage issues, and train stakeholders on how to get the most out of our AI-powered analytics tools.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Strong SQL proficiency and experience working with modern data tools (dbt, Databricks, Snowflake, or similar).</li>
<li>Clear, structured writing: Can translate complex business logic into documentation that both humans and LLMs can interpret.</li>
<li>Hands-on experience with AI tools (Claude, ChatGPT, Cursor, or similar) beyond casual use.</li>
<li>Cross-functional communication: Can partner with non-technical stakeholders to understand needs, triage issues, and drive adoption.</li>
<li>Builder mindset: Comfortable picking up new technical skills, prototyping solutions, and iterating quickly.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience with BI semantic modeling (Looker, Omni Analytics, or similar).</li>
<li>Familiarity with Python and LLM APIs.</li>
<li>Experience building evaluation or testing frameworks.</li>
<li>Background in context engineering, knowledge management, or technical writing.</li>
<li>Experience with agent architectures, prompt engineering, or AI system design.</li>
<li>Familiarity with data science and ML concepts (e.g., experimentation, time series analysis, statistical modeling, clustering, anomaly detection).</li>
</ul>
<p>Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, dbt, Databricks, Snowflake, AI tools, Claude, ChatGPT, Cursor, cross-functional communication, builder mindset, BI semantic modeling, Python, LLM APIs, evaluation or testing frameworks, context engineering, knowledge management, technical writing, agent architectures, prompt engineering, AI system design, data science, ML concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. Over 500,000 organisations, including 80% of the Fortune 100, rely on Airtable.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8434287002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA; Austin, TX; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>87749959-700</externalid>
      <Title>Intern Data Engineering (all genders)</Title>
      <Description><![CDATA[<p>Join our Data Engineering team inside the Business Intelligence department, where you&#39;ll work with experienced engineers to build the data foundation that powers Holidu&#39;s growth.</p>
<p>As an intern, you&#39;ll get hands-on experience with real problems and have the opportunity to make a meaningful impact. You&#39;ll work on building and supporting data pipelines, digging into data quality, getting hands-on with cloud infrastructure, and exploring AI-assisted development.</p>
<p>Our team uses a range of technologies, including Redshift, Athena, DuckDB, Terraform, Docker, Jenkins, ELK, Grafana, Looker, OpsGenie, Kafka, Airbyte, and Fivetran. You&#39;ll have the chance to learn from experienced engineers and contribute to the development of our data systems.</p>
<p>In this role, you&#39;ll be part of a team that genuinely loves what it does and is passionate about building a better data foundation for Holidu. You&#39;ll have the opportunity to take responsibility from day one and develop through regular feedback.</p>
<p>We offer a fair salary and the chance to make a difference for hundreds of thousands of monthly users. You&#39;ll also have access to a range of benefits, including a hybrid work policy, the chance to work from other local offices, and a corporate subscription to Urban Sports Club or a premium gym membership at a discounted rate.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Internship</Jobtype>
      <Experiencelevel>internship</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Git, Airflow, dbt, Docker, Cloud platform (AWS, GCP, etc.), LLM tools, AI-assisted coding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a search engine for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2557398?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad717304-da7</externalid>
      <Title>Intern Data Analytics (all genders)</Title>
      <Description><![CDATA[<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>
<p>This internship provides a great opportunity to gain hands-on experience in Data Analytics. You will work alongside a team of highly skilled and dedicated professionals who are committed to offering strong mentorship and guidance to help you start your career in the field of data.</p>
<p>Duration: 6 months. Location: Munich, 2-3 office days per week.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git, Atlassian.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a Data Analytics Intern at Holidu, you’ll help our company make smarter, data-driven decisions, while being supported by a Senior Analyst.</p>
<p>This role goes beyond building dashboards. We want curious, proactive people who want to become data advisors - not only delivering reports, but understanding the business context: which questions the reports answer and why they matter.</p>
<ul>
<li>Collect, analyse, and interpret large datasets to help solve real business challenges.</li>
<li>Build dashboards and reports using tools like SQL, Python, and Looker.</li>
<li>Collaborate closely with teams such as Product, Marketing, or Finance to help them extract actionable insights from data.</li>
<li>Build and improve data pipelines using cutting-edge technologies.</li>
<li>We are an AI-first team. Rather than manually executing repetitive tasks, you will use AI to work smarter and automate workflows.</li>
<li>You’ll collaborate with our Data Scientists and get exposure to:
<ul>
<li>Data preparation and exploratory data analysis.</li>
<li>How ML models are built, evaluated, and deployed in real life.</li>
</ul>
</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>Currently enrolled in or recently completed a Bachelor’s or Master’s degree in a quantitative field (e.g., Business Analytics, Data Science, Economics, Statistics, Mathematics, Engineering or similar).</li>
<li>Understanding of SQL and Python, proficiency in Excel/Google Sheets and a desire to learn visualization tools like Looker.</li>
<li>Knowledge of Machine Learning and Statistical models is a plus.</li>
<li>Strong analytical and problem-solving skills, and attention to detail.</li>
<li>Curiosity to learn and a passion for solving data problems.</li>
<li>Good communication and presentation skills.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Compensation: Get a fair salary.</li>
<li>Impact: Make a difference for hundreds of thousands of monthly users.</li>
<li>Growth: Take responsibility from day one and develop through regular feedback.</li>
<li>Community: Engage with international, diverse, yet like-minded colleagues through regular events and 2 office days per week with your team.</li>
<li>Flexibility: Benefit from our hybrid work policy and the chance to work from other local offices for up to 8 weeks a year.</li>
<li>Fitness: Get an Urban Sports Club corporate subscription or a premium gym membership at a discounted rate.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Internship</Jobtype>
      <Experiencelevel>internship</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Looker, Git, Atlassian, Airflow, DBT, AWS Stack, Redshift, Athena, Glue, S3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search and recommendation services for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2556233?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc9213ff-135</externalid>
      <Title>(Senior) Team Lead Marketing Analytics (all genders)</Title>
      <Description><![CDATA[<p>Within the Marketing Technology department, we are building a new Marketing Analytics team and are looking for a Team Lead to shape it from the ground up.</p>
<p>You&#39;ll work closely with a wide range of Marketing stakeholders, ensuring they have the data, tools, and insights they need to drive sustainable growth. Moreover, you will also collaborate with data scientists and data engineers within the department to build best-in-class analytical solutions.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git, Jira, Confluence, Slack.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>You&#39;ll be leading data analysts and collaborating cross-functionally with data engineers and data scientists - fostering collaboration, learning, and analytical excellence.</li>
<li>Engage with senior marketing leadership on strategic projects, providing insights that influence channel strategy and budget decisions, and ultimately our revenue growth.</li>
<li>Translate marketing logic for a diverse range of channels (e.g. Performance Marketing, SEO, CRM, affiliate) and use cases into analytical requirements, and communicate complex findings clearly to both technical and commercial teams.</li>
<li>Support and partner with Marketing Technology on tracking, event design, and data flows to ensure data quality and reliable reporting frameworks.</li>
<li>Not shying away from hands-on work as an individual contributor (50% to start) while leading the team, diving deep into the details when needed.</li>
<li>Shape the future of marketing analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>5+ years multi-channel marketing analytics experience in a B2B or B2C organisation where marketing is a core performance driver, with extensive hands-on expertise in at least one of the following: attribution, cost and revenue allocation, or bidding.</li>
<li>People management experience - this should not be your first leadership role.</li>
<li>A collaborative mindset with clear experience communicating with executive stakeholders and senior decision makers.</li>
<li>You are mission-driven, with a working backwards mentality (i.e. starting with customer needs) and clear experience managing and delivering complex projects with multiple stakeholders. Ability to translate business goals into analytical solutions and break down complex topics into actionable insights.</li>
<li>Excellent analytical and technical skills. Concretely: strong SQL and Python (or similar), solid data visualisation skills, and experience developing technical frameworks to serve a clear business need.</li>
<li>A strong personal or team focus on AI enablement: you actively use AI tools to enhance your coding, planning, and workflows, and can enable your team to do the same.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS Stack, Airflow, DBT, Looker, SQL, Python, Git, Jira, Confluence, Slack</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that provides a search engine for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2458940?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25fd58ed-3c0</externalid>
      <Title>(Senior) Data Scientist (all genders)</Title>
      <Description><![CDATA[<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>
<p>As a Senior Data Scientist, you will work on various topics such as rankings, recommendations, user segmentation, user lifetime value, and business forecasts. You will have access to our huge dataset and work in collaboration with stakeholders from various departments.</p>
<p>Your objective is to build the best internal and external products for our customers. Holidu highly values a diverse and open environment with people from all over the world.</p>
<p>This role is based in Munich with a hybrid setup.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Flexible data science environment (Python, SageMaker).</li>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>You will play a pivotal role in the Business Intelligence team alongside data scientists, analysts, and engineers. Together, you will lead the development and enhancement of our company-wide machine learning strategy.</p>
<ul>
<li>Collaborate across various business departments to identify opportunities and solve critical business challenges using data science solutions.</li>
<li>Build and optimize predictive models such as booking cancellation forecasts, churn predictions, pricing optimization, revenue forecasting and marketing channel allocation.</li>
<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>
<li>Interface with diverse business stakeholders, ensuring alignment between data science initiatives and company goals.</li>
<li>Demonstrate leadership in data science projects, leveraging your expertise to drive measurable business impact.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>3+ years of experience as a Data Scientist, with a proven track record of applying data science methodologies to solve complex business problems.</li>
<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>
<li>Expertise in statistics, predictive analytics, machine learning techniques, and proficiency in tools like Python and SQL.</li>
<li>Experience with Airflow and dbt is a plus.</li>
<li>Strong understanding of business operations and experience collaborating with diverse stakeholders.</li>
<li>Enthusiasm for data science and a drive to deliver world-class products that make a difference.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SageMaker, AWS Stack, Airflow, DBT, Looker, SQL, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that provides a search engine for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2555141?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>528bf454-d13</externalid>
      <Title>Data Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to join our team. As a key member of our data organization, you will be responsible for transforming raw data into a strategic asset by designing high-performance data models that power our financial reporting, product forecasting, and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and our core business data models, and understand the current pain points in our data flow. You will deliver and optimize your first high-priority models for product usage and financial reporting. You will partner with the Data Engineer to align on the new infrastructure roadmap.</p>
<p>Within 6 months, you will implement a robust semantic layer to standardize KPIs across the company and enable AI-readiness and advanced natural language querying.</p>
<p>After 1 year, you will fully own the company&#39;s data modeling architecture, ensuring it is prepared for AI and machine learning applications. You will act as a strategic advisor to department heads, using data to help shape the company&#39;s long-term growth and forecasting strategies.</p>
<p>What You&#39;ll Be Doing</p>
<p>Strategic Data Product Ownership: Manage the end-to-end lifecycle of our internal data products. You will partner with stakeholders to translate complex business questions into technical requirements, selecting the right tools to ensure our reporting is scalable, accessible, and high-impact.</p>
<p>Advanced Analytics Engineering: Design, build, and maintain our core data models using dbt Labs. You will own the logic for mission-critical datasets, including financial reporting, churn forecasting, and reverse-ETL flows that sync warehouse data back into our business tools (e.g., Planhat, HubSpot).</p>
<p>Data Governance &amp; Semantic Layering: Act as the guardian of &#39;The Truth.&#39; You will implement data governance standards and build our semantic layer to ensure metrics are consistent across the company.</p>
<p>Data Democratization &amp; Enablement: In collaboration with RevOps, you will design and deliver training programs and documentation. Your goal is to empower users across Finance, Product, and GTM to independently navigate data products and derive their own insights.</p>
<p>Collaboration: You will be the central hub of our data organization. You will work daily with the Data Engineer to align on the roadmap, while frequently consulting with Finance, GTM, and Product leaders to ensure our data products solve their most pressing problems.</p>
<p>What You Bring</p>
<p>Solid experience in Analytics Engineering, Data Analysis, or Data Engineering, with a track record of independently delivering data products that enable reporting, decision-making, and CDP use cases.</p>
<p>You are an expert in SQL and understand how to write performant, modular code. Familiarity with Python and Git for optimizing and versioning data transformations is a significant advantage.</p>
<p>Deep, hands-on experience with dbt and BigQuery is a must. You should also be comfortable navigating ELT tools like Airbyte or Fivetran.</p>
<p>Commercially savvy: you understand the business. You can spot opportunities where data can improve ARR, reduce churn, or optimize spend.</p>
<p>You thrive in fast-paced environments and are comfortable creating structure out of the uncertainty of a scaling company.</p>
<p>Strong project management and stakeholder management skills. You are a &#39;bilingual&#39; communicator who can discuss warehouse schemas with an engineer and ARR growth with a CFO.</p>
<p>Fluency in English, both written and spoken, at a minimum C1 level</p>
<p>What We Offer</p>
<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</p>
<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe</p>
<p>Work in a diverse and multicultural team</p>
<p>€1,500 annual training budget plus internal training</p>
<p>Pension plan, travel reimbursement, and wellness perks</p>
<p>28 paid holiday days + 2 additional days to relax in 2026</p>
<p>Work from anywhere for 4 weeks/year</p>
<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>
<p>Apple MacBook and tools</p>
<p>€200 Home Office budget</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>SQL, dbt, BigQuery, Airbyte, Python, Git, ELT tools, Data governance, Semantic layering, Data democratization, Enablement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-analytics-engineer?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>About the Role We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p>What You’ll Be Doing</p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p>What You Bring</p>
<p>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</p>
<p>The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</p>
<p>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem, plus Infrastructure-as-Code (Terraform).</p>
<p>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</p>
<p>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</p>
<p>Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</p>
<p>Comfort working in a fast-paced environment.</p>
<p>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and taking care of your own project scoping and backlog management.</p>
<p>Fluency in English, both written and spoken, at a minimum C1 level.</p>
<p>What We Offer</p>
<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</p>
<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe</p>
<p>Work in a diverse and multicultural team</p>
<p>€1,500 annual training budget plus internal training</p>
<p>Pension plan, travel reimbursement, and wellness perks</p>
<p>28 paid holiday days + 2 additional days to relax in 2026</p>
<p>Work from anywhere for 4 weeks/year</p>
<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>
<p>Apple MacBook and tools</p>
<p>€200 Home Office budget</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6bfc6b4-74f</externalid>
      <Title>Senior Data Scientist - Marketing (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers. Together, we build machine learning and analytics products that directly influence GMV, conversion, and retention.</p>
<p>Within the department, we’re building a new Marketing Analytics team and are looking for a Senior Data Scientist to drive its data science initiatives. In this role, you’ll work closely with Analysts, Engineers, and Marketing stakeholders to develop and productionize advanced machine learning, statistical, and predictive models that improve marketing performance and drive measurable company growth.</p>
<p>As a Senior Data Scientist – Marketing, you’ll take strong ownership of data science initiatives that directly shape our marketing strategy and growth. You will:</p>
<p>Partner closely with Marketing, Marketing Analytics, and Marketing Technology to identify opportunities and translate business questions into scalable data science solutions.</p>
<p>Lead the development of high-impact machine learning and statistical models for marketing use cases such as channel allocation, ad bidding, churn prediction, lifetime value, revenue attribution, and business metrics forecasting.</p>
<p>Work end-to-end - from translating business questions into hypotheses to researching, building, validating, and deploying models.</p>
<p>Run experiments and iterate in production: design A/B tests, monitor model performance, and continuously improve based on measured impact.</p>
<p>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</p>
<p>Help define the team&#39;s roadmap and ways of working as a founding member of Marketing Analytics - your input will help shape this function.</p>
<p>Act as a senior role model in the team, sharing best practices and helping raise the bar for data science at Holidu.</p>
<p>We&#39;re looking for someone with 5+ years of experience as a Data Scientist, with clear ownership of projects that delivered measurable business impact. You should have a degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field, and strong expertise in machine learning, statistics, and predictive analytics, with hands-on experience using Python and SQL.</p>
<p>Experience with marketing data science use cases such as attribution modeling, customer lifetime value prediction, churn modeling, or bid optimization is also required. You should have a solid understanding of marketing concepts across channels (e.g. Performance Marketing, SEO, CRM, Affiliate) and how data science can improve them.</p>
<p>Additionally, you should have experience working with modern data stacks, ideally including AWS (Redshift, Athena, S3), Airflow, dbt, and Git. A collaborative mindset paired with great communication skills is essential, as you&#39;ll need to work with diverse stakeholders and explain complex topics in a simple way.</p>
<p>AI proficiency is also a plus: you should be comfortable using AI to enhance coding, planning, and monitoring, able to integrate AI tools (such as Claude Code, Codex, Copilot, etc.) into your workflow, and able to teach others to use them efficiently.</p>
<p>If you&#39;re excited about the opportunity to shape the future of travel with products used by millions of guests and thousands of hosts, apply now!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Statistics, Predictive Analytics, Python, SQL, Marketing Data Science, Attribution Modeling, Customer Lifetime Value Prediction, Churn Modeling, Bid Optimization, AI, CI/CD Pipelines, Retraining Workflows, Lineage Tracking, Documentation, Airflow, dbt, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that helps users find and book vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2510157?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80d15de9-aa7</externalid>
      <Title>Senior Data Scientist - Rankings &amp; Recommendations (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers.</p>
<p>You will join a cross-functional Product team, Search Intelligence, which is responsible for optimizing ranking and recommendations for users visiting our website.</p>
<p>You&#39;ll be part of the broader Data Science team, which operates across cross-functional domain teams - giving you access to shared knowledge, best practices, and collaboration opportunities beyond your domain.</p>
<p>You’ll collaborate daily with Data Engineers, Analysts, Product Managers, and Back-end Engineers.</p>
<p>You’ll report to the Team Lead, Data Science.</p>
<p>Together, we turn data into actionable insights and innovative technology that powers how millions of guests find and book their perfect holiday home.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Python • Airflow • dbt • AWS (SageMaker, Redshift, Athena) • MLflow</li>
</ul>
<p><strong>The Ranking challenge at Holidu</strong></p>
<p>Holidu lists over 4 million vacation rental properties. Our ranking and personalization systems determine which of them our 70+ million annual users see, directly impacting search conversion and business results.</p>
<p>What&#39;s live today:</p>
<ul>
<li>Multi-stage ranking pipeline: Reinforcement-learning-based cold ranking, contextual re-ranking, and personalized recommendations.</li>
<li>Cold-start models for new properties with limited behavioral data.</li>
<li>Personalized recommendations based on user browsing patterns.</li>
</ul>
<p>Some of the hard problems we&#39;re solving:</p>
<ul>
<li>Multi-objective optimization: Balancing user relevance, conversion probability, and business value (see the sketch after this list).</li>
<li>Personalization without history: Most users are anonymous or first-time visitors.</li>
<li>Cold-start: A significant share of our inventory is new each quarter. How do we surface quality properties before we have behavioral data?</li>
</ul>
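<p>To make that multi-objective trade-off concrete, here is a minimal, purely illustrative Python sketch of linear scalarization; the weights, field names, and scoring form are hypothetical, not Holidu&#39;s actual model:</p>
<pre><code># Illustrative only: linearly blend user relevance, conversion
# probability, and business value into one ranking score.
from dataclasses import dataclass

@dataclass
class Candidate:
    property_id: str
    relevance: float      # user-relevance score in [0, 1]
    p_conversion: float   # predicted booking probability
    margin: float         # expected business value per booking

def score(c: Candidate, w_rel=0.5, w_conv=0.3, w_val=0.2) -> float:
    # Fixed weights here; in practice the trade-off would be tuned
    # against online metrics, not hand-set.
    expected_value = c.p_conversion * c.margin
    return w_rel * c.relevance + w_conv * c.p_conversion + w_val * expected_value

def rank(candidates: list[Candidate]) -> list[Candidate]:
    return sorted(candidates, key=score, reverse=True)
</code></pre>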
<p><strong>Your role in this journey</strong></p>
<p>You&#39;ll shape the ranking and recommendation systems that millions of guests rely on to find their holiday home. With access to extensive datasets and modern ML infrastructure, you&#39;ll work end-to-end - from identifying opportunities and prototyping new approaches to shipping models to production and measuring their impact.</p>
<ul>
<li>Develop high-impact models and improvements for our ranking, recommendation, and personalization systems - with the freedom to explore new, creative approaches.</li>
<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>
<li>Design and run A/B tests as a core part of ranking development; success is measured by successful experiments per quarter and time-to-decision (a minimal evaluation sketch follows this list).</li>
<li>Collaborate closely with Product Managers and Software Engineers to identify, prioritize, and ship ranking improvements.</li>
<li>Ensure model reliability in production, measured by online/offline agreement, model and data drift KPIs, latency and uptime SLAs, and automated monitoring coverage.</li>
<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</li>
<li>Demonstrate leadership in data science projects by driving technical direction, scoping initiatives, and guiding the team&#39;s prioritization and project execution.</li>
</ul>
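<p>As a hedged illustration of the A/B evaluation step, the snippet below runs a two-sided two-proportion z-test on search conversion using only the Python standard library; the traffic and booking figures are invented for the example:</p>
<pre><code># Two-proportion z-test: does the variant convert differently?
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for H0: equal conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

# control: 4,210 bookings / 210,000 searches; variant: 4,530 / 209,500
print(two_proportion_ztest(4210, 210_000, 4530, 209_500))
</code></pre>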
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>5+ years of experience as a Data Scientist, with a proven track record of applying ML models to solve real business problems.</li>
<li>Experience working on ranking models or recommender systems is a strong advantage.</li>
<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>
<li>Strong foundations in statistics, predictive modeling, and machine learning techniques, with hands-on experience using Python and SQL.</li>
<li>Experience with Airflow and dbt is a plus.</li>
<li>Solid understanding of business operations and the ability to translate data insights into clear, actionable outcomes.</li>
<li>A collaborative mindset and enthusiasm for using data to build world-class products that make a real impact.</li>
<li>AI Proficiency: You are comfortable using AI to enhance coding, planning, and monitoring. This includes successfully integrating AI tools (such as Claude Code, Codex, Copilot) into your workflow and teaching others to use them efficiently.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you&#39;ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You&#39;ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious, and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You&#39;ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year working from other inspiring locations. You&#39;ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p>Need a sneak peek? Check out the adventure that awaits you on Instagram @lifeatholidu and dive straight into the world of Tech at Holidu for more insights!</p>
<p><strong>Want to travel with us?</strong></p>
<p>Apply online on our careers page! Your first travel contact will be Lucia from HR.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Airflow, dbt, AWS, MLflow, Machine Learning, Statistics, Predictive Modeling, SQL, AI, Data Science, Ranking Models, Recommender Systems, Collaboration, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, listing over 4 million properties and serving 70+ million annual users.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2413808?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data (a minimal sketch follows this list)</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
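<p>As a hedged, minimal sketch of the quality-gate idea above - not Anthropic&#39;s actual code, and with invented thresholds and field names - the function below validates a batch before it would reach safety-critical downstream tables:</p>
<pre><code># Validate a batch before it reaches downstream safety tables.
from datetime import datetime, timedelta, timezone

def quality_gate(rows: list[dict]) -> list[str]:
    """Return failure messages; an empty list means the batch passes."""
    failures = []
    if not rows:
        return ["batch is empty"]
    null_ids = sum(1 for r in rows if r.get("event_id") is None)
    if null_ids / len(rows) > 0.001:
        failures.append(f"null event_id rate too high: {null_ids}/{len(rows)}")
    newest = max(r["ts"] for r in rows)  # "ts" assumed to be an aware datetime
    if datetime.now(timezone.utc) - newest > timedelta(hours=6):
        failures.append("stale batch: newest ts older than 6 hours")
    return failures
</code></pre>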
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact - advancing our long-term goals of steerable, trustworthy AI - rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000-£220,000 GBP</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>28d86251-85e</externalid>
      <Title>Senior Data Analyst, Marketing Analytics</Title>
      <Description><![CDATA[<p>As a Senior Data Analyst, Marketing Analytics at GitLab, you&#39;ll serve as a strategic analytics partner to Marketing leadership and help shape how we measure, model, and improve marketing performance.</p>
<p>Reporting to the Senior Manager of Marketing Analytics, you&#39;ll take ownership of high-impact analytical work across attribution, inbound funnel analysis, target setting, and campaign measurement.</p>
<p>You&#39;ll work across the full data lifecycle, from building and maintaining dbt models in Snowflake that power business intelligence, to delivering executive-ready insights, to partnering cross-functionally to support important business decisions.</p>
<p>In this role, you&#39;ll use a modern analytics and AI-enabled workflow that includes Snowflake, Claude with MCP connections, and GitLab Duo Agent Platform in git-based team workflows.</p>
<p>This is a high-visibility role for someone with strong SQL skills, sound judgment in B2B software marketing analytics, and the ability to turn complex data into clear stories for senior stakeholders in our all-remote, values-driven environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Serve as a trusted advisor to senior Marketing and Developer Relations leaders, providing data-driven insights that inform marketing strategy, product development, and go-to-market decisions.</li>
<li>Establish and scale proactive insights across the Marketing function, including experimentation and connecting marketing credit from initial touchpoints to annual recurring revenue.</li>
<li>Measure the impact of community programs, developer advocacy, and evangelism efforts on awareness, adoption, and pipeline.</li>
<li>Evaluate the effectiveness of demand generation programs, including trial conversion analysis across segments, lead scoring optimization, and outreach platform performance.</li>
<li>Partner closely with Analytics Engineering and Data Platform teams to translate business questions into clear technical requirements and support reliable delivery in the analytics stack.</li>
<li>Help evolve GitLab&#39;s multi-touch attribution approach, including attributed metric definitions and implementation across reporting (a toy attribution sketch follows this list).</li>
<li>Contribute to annual planning cycles through partnership with Sales, Finance, and Product leadership.</li>
<li>Use AI-enabled analytics workflows through GitLab Duo Agent Platform, Claude with MCP integrations, and Glean within GitLab to improve how insights are developed and shared.</li>
</ul>
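<p>To give a flavor of multi-touch attribution - as a hedged toy example, not GitLab&#39;s actual model - the sketch below applies linear attribution, splitting each won opportunity&#39;s annual recurring revenue equally across the touchpoints on its path; the data shapes and channel names are invented:</p>
<pre><code># Linear multi-touch attribution: equal credit per touchpoint.
from collections import defaultdict

def linear_attribution(opportunities: list[dict]) -> dict[str, float]:
    """opportunities: [{"arr": 120000.0, "touchpoints": ["webinar", ...]}]"""
    credit: dict[str, float] = defaultdict(float)
    for opp in opportunities:
        touches = opp["touchpoints"]
        if not touches:
            continue
        share = opp["arr"] / len(touches)
        for channel in touches:
            credit[channel] += share
    return dict(credit)

print(linear_attribution([
    {"arr": 120_000.0, "touchpoints": ["paid_search", "webinar", "trial"]},
    {"arr": 60_000.0, "touchpoints": ["webinar"]},
]))
</code></pre>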
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience in a data analyst or analytics engineer role, preferably in B2B software marketing analytics or in a large-scale analytics consultancy supporting similar companies.</li>
<li>Strong proficiency in SQL, including the ability to connect disparate data sources and build a clear view of marketing program performance in a cloud data warehouse, ideally Snowflake.</li>
<li>Experience working with senior leaders and tailoring data-driven narratives to support decision-making.</li>
<li>Hands-on experience with dbt for data transformation, testing, and documentation.</li>
<li>Strong understanding of B2B marketing funnel metrics, including marketing qualified leads, sales accepted opportunities, pipeline, multi-touch attribution, and conversion rate analysis.</li>
<li>Experience working with marketing automation platforms such as Marketo and customer relationship management systems such as Salesforce, including data integration patterns and sync logic.</li>
<li>Proficiency with a business intelligence and visualization tool, with Tableau preferred.</li>
<li>Excellent written and verbal communication skills, with the ability to distill complex analysis into clear narratives for technical and non-technical audiences.</li>
<li>Experience with AI-assisted analytics development, including MCP connections and agentic workflows.</li>
<li>Familiarity with Git workflows for version control, continuous integration and continuous delivery, issue tracking, and merge request workflows.</li>
<li>Comfort working in a remote, async-first, globally distributed environment.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Marketing Analytics team sits within GitLab&#39;s Enterprise Data organization and partners closely with the broader Marketing team as well as other departments across GitLab.</p>
<p>We build and maintain trusted, scalable data products that help inform strategic decisions for Marketing while also creating alignment with related business metrics, especially across Product and Sales.</p>
<p>You&#39;ll join a fully remote team that collaborates asynchronously across time zones using our shared standards, code review, and documentation to support consistency and quality.</p>
<p>We work on analytically complex, high-impact problems where strong data foundations, clear definitions, and thoughtful cross-functional partnerships are essential to helping stakeholders understand performance and make better decisions.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$94,100-$201,600 USD</Salaryrange>
      <Skills>SQL, dbt, Snowflake, Tableau, Marketo, Salesforce, AI-assisted analytics development, MCP connections, agentic workflows, Git workflows, version control, continuous integration and continuous delivery, issue tracking, merge request workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8472178002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e58b08f7-c31</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>As a Senior Data Engineer on the Analytics Team, you will collaborate with stakeholders across the company to design, build and implement data pipelines and models that enable our next generation of technology to be deployed around the world. You will have a hand in helping shape the data platform vision at Anduril.</p>
<p>We&#39;re looking for software and data engineers seeking high-impact, collaborative roles focused on driving operational execution. Ideally, you are looking to learn what it takes to build the next generation of defence technology.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Leading the design and roadmap for our data platform.</li>
<li>Partnering with operations, product, and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs.</li>
<li>Owning the ingest and egress frameworks for data pipelines that stitch together various data sources in order to produce valuable data products that drive the business.</li>
<li>Managing a large user base and providing true data self-service at scale.</li>
</ul>
<p>We use Palantir Foundry as our central hub for data-driven applications, visualizations and large-scale data analysis across the Anduril org. We also use SQLMesh for data transformations, Athena for querying data, Apache Iceberg as our table format, and Flyte for orchestration.</p>
<p>Required qualifications include:</p>
<ul>
<li>5+ years of experience in a data engineering role building products, ideally in a fast-paced environment.</li>
<li>Good foundations in Python or another language.</li>
<li>Experience with Spark, PySpark, SQL, and dbt (a generic flavor of this work is sketched below).</li>
<li>Experience with Enterprise Data Systems like Palantir Foundry.</li>
<li>Experience with, or interest in learning how to develop, data services and data products.</li>
</ul>
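<p>As a hedged, generic PySpark sketch of this kind of transformation work - not Anduril&#39;s pipelines, with placeholder table names and columns, and assuming an Iceberg-enabled Spark catalog - consider a daily rollup that turns raw events into a small data product:</p>
<pre><code># Read an Iceberg table, derive a daily aggregate data product.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-telemetry-rollup").getOrCreate()

events = spark.read.format("iceberg").load("warehouse.telemetry.events")

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "device_id")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("latency_ms").alias("avg_latency_ms"),
    )
)

daily.writeTo("warehouse.telemetry.daily_rollup").createOrReplace()
</code></pre>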
<p>The salary range for this role is $166,000-$220,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Python, Spark, PySpark, SQL, dbt, Palantir Foundry, SQLMesh, Athena, Apache Iceberg, Flyte</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a defence technology company working to solve big problems in defence.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4587312007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477d343e-e37</externalid>
      <Title>Customer Success Architect</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>
<p>About the Customer Success Team:</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role:</p>
<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value - beginning in pre-sales, where you run proofs of concept to demonstrate quick time to value, and continuing through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and consult with them on how to maximize the value of Mixpanel, including managing change during Mixpanel&#39;s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>
<p>For large enterprise customers, post-onboarding, you will also continue to work alongside Account Managers to drive data trust and product adoption for 100+ end-user teams through a change management rollout approach.</p>
<p>Responsibilities:</p>
<p>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</p>
<p>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</p>
<p>Own the customer&#39;s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</p>
<p>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</p>
<p>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</p>
<p>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</p>
<p>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</p>
<p>Successfully deliver on success outcomes while balancing project timelines, scope creep, and unanticipated issues</p>
<p>Bridge the technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</p>
<p>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</p>
<p>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</p>
<p>We&#39;re Looking For Someone Who Has</p>
<p>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</p>
<p>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</p>
<p>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</p>
<p>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</p>
<p>A demonstrated record of success in customer success, client-facing professional services, consulting, or technical project management roles</p>
<p>Excellent written, analytical, and communication skills</p>
<p>Strong process and/or project delivery discipline</p>
<p>Eagerness to learn new technologies and adapt to evolving customer needs</p>
<p>We&#39;d Be Extra Excited For Someone Who Has</p>
<p>Experience in data querying, modeling, and transforming in at least one core tool, including SQL / dbt / Python / Business Intelligence tools / Product Analytics tools, etc.</p>
<p>Familiarity with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>
<p>Familiarity with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc. (a minimal SDK example follows below)</p>
<p>Familiarity with analytics best practices across business segments and verticals</p>
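<p>For a concrete sense of SDK-based instrumentation, here is a minimal server-side example using the official mixpanel Python library (pip install mixpanel); the project token, user ID, and event schema are placeholders:</p>
<pre><code># Track one event per user action, with properties to slice by.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder token

mp.track("user-123", "Signed Up", {
    "plan": "enterprise",
    "signup_source": "landing_page",
})
</code></pre>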
<p>Benefits and Perks</p>
<p>Comprehensive Medical, Vision, and Dental Care</p>
<p>Mental Wellness Benefit</p>
<p>Generous Vacation Policy &amp; Additional Company Holidays</p>
<p>Enhanced Parental Leave</p>
<p>Volunteer Time Off</p>
<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>
<p>Culture Values</p>
<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>
<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>
<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>
<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>
<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>
<p>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</p>
<p>Why choose Mixpanel?</p>
<p>We&#39;re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter - our people, our customers, our partners - out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leading provider of digital analytics software, serving over 29,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7506821?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>03224784-9c2</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Job Title: Senior Data Engineering Manager</p>
<p>Location: Dublin, Ireland</p>
<p>Department: R&amp;D</p>
<p>Job Description:</p>
<p>Intercom is seeking a Senior Data Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>Strategic Impact You&#39;ll Drive:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>
</ul>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>AI-assisted Local Analytics Development Environment for Airflow and DBT.</li>
<li>Data-rich AI apps containerized on Snowflake SPCS.</li>
<li>A new, modern data catalog solution.</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>
<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder focus: the ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>
<li>Technical depth: you&#39;re not afraid to get your hands dirty and write code when needed.</li>
<li>A leader and mentor: you naturally recognize opportunities to step back and mentor others.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment (a minimal DAG sketch follows this list).</li>
<li>Modern data stack fluency: familiarity with tools like Snowflake and DBT.</li>
<li>Future-focused: you keep a keen eye on industry trends and emerging technologies.</li>
</ul>
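<p>For orientation, here is a minimal Airflow DAG sketch - with placeholder dag_id, schedule, and callables rather than Intercom&#39;s pipelines, and assuming a recent Airflow 2.x - showing the shape of a daily extract-then-load job:</p>
<pre><code># A two-task daily pipeline: extract, then load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull the previous day of records from the source system

def load():
    ...  # write validated records to the warehouse

with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
</code></pre>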
<p>Benefits:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up.</li>
<li>We serve lunch every weekday, plus a variety of snacks, and keep a fully stocked kitchen.</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%.</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>
<li>Paid maternity leave, as well as 6 weeks of paternity leave for fathers, to let you spend valuable time with your loved ones.</li>
<li>If you&#39;re cycling, we&#39;ve got you covered with the Cycle-to-Work Scheme, with secure bike storage too.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>Policies:</p>
<ul>
<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate more easily, and create a great culture, while still providing flexibility to work from home.</li>
<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer experiences for businesses. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e715f09-34a</externalid>
      <Title>Solutions Architect, Commercial</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>As a Solutions Architect (SA), your role is to help prospects understand the value that dbt Cloud can bring to their organisation. You&#39;ll lead technical discussions with prospective customers, uncover their data challenges, and demonstrate how dbt Cloud can meet their needs through live demos and technical workshops.</p>
<p>Internally, you&#39;ll contribute to building and refining our processes and playbooks, while acting as the voice of the customer to ensure we continue developing products that solve real problems and delight our users.</p>
<p>Responsibilities</p>
<ul>
<li>Spend the majority of your time on pre-sales opportunities to understand customer use cases and identify the value dbt Cloud can bring to their organisation</li>
<li>Consult with potential customers to uncover their data challenges and highlight how dbt Cloud can alleviate pain points and support their goals</li>
<li>Own the demonstration of technical and business impact by delivering tailored demos of dbt Cloud and providing expert guidance</li>
<li>Collaborate closely with Account Executives, building strong, trust-based relationships and offering strategic input throughout the deal process</li>
<li>Maintain ongoing relationships with technical stakeholders within your accounts</li>
<li>Partner with internal teams to improve how we work together, represent the voice of the customer in product discussions, and contribute to cross-functional initiatives</li>
<li>Participate in our knowledge-sharing loop by helping improve team processes, refining assets, and enabling others through collaboration</li>
<li>Create and deliver external-facing content through live events, blog posts, recorded tutorials, or other educational resources</li>
</ul>
<p>Requirements</p>
<ul>
<li>4+ years of experience as a data practitioner or consultant in data operations or analytics</li>
<li>Strong technical foundation, with a solid understanding of modern data warehousing architectures, the modern data stack, and proficiency in SQL</li>
<li>Prior experience with dbt is preferred; bonus points for dbt certification</li>
<li>Experience working asynchronously as part of a partially remote, distributed team</li>
<li>High degree of comfort presenting to diverse stakeholders or audiences, ideally in an externally facing role</li>
<li>Ability to operate effectively in ambiguous, fast-paced environments and think on your feet during customer conversations</li>
<li>Desire to be a strong team player, both within the SA team as we evolve our ways of working and in daily collaboration with Account Executives as part of a deal team</li>
<li>Openness to travel; we value the power of in-person relationship building, both internally and externally, and expect willingness to travel to events as needed</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Basic Python competency and advanced SQL knowledge</li>
<li>Proven experience in a pre-sales tech role</li>
<li>Experience with traditional ETL tooling</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and, where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs&#39; total rewards during your interview process.</p>
<p>OTE Range (Select Locations)</p>
<p>$145,000-$172,000 USD</p>
<p>OTE Range</p>
<p>$142,000-$202,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000-$172,000 USD</Salaryrange>
      <Skills>data operations, analytics, modern data warehousing architectures, modern data stack, SQL, dbt, python, pre-sales tech role, ETL tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, helping data teams transform raw data into reliable, actionable insights.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4671489005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53ee0ef3-c62</externalid>
      <Title>Staff Data Engineer, Analytics Data Engineering</Title>
      <Description><![CDATA[<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>
<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>
<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions (a toy metrics-registry sketch follows this list)</li>
<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>
<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>
<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>
<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>
<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>
<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>
</ul>
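<p>To illustrate what a certified metrics layer buys you - a single registered definition per metric, so every dashboard computes it the same way - here is a toy Python sketch; it is not Dropbox&#39;s framework, and the metric, fields, and owner are invented:</p>
<pre><code># One canonical, owned definition per certified metric.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    description: str
    sql: str          # canonical definition, rendered into pipelines
    owner: str
    certified: bool

REGISTRY: dict[str, Metric] = {}

def register(metric: Metric) -> None:
    if metric.name in REGISTRY:
        raise ValueError(f"duplicate metric: {metric.name}")
    REGISTRY[metric.name] = metric

register(Metric(
    name="weekly_active_users",
    description="Distinct users with a qualifying event in a 7-day window",
    sql="SELECT COUNT(DISTINCT user_id) FROM fct_events WHERE ...",
    owner="analytics-data-eng",
    certified=True,
))
</code></pre>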
<p>Requirements:</p>
<ul>
<li>BS degree in Computer Science or a related technical field, or equivalent technical experience</li>
<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>
<li>12+ years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>
<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>
<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>
<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>
<li>Experience leading orchestration or platform modernization efforts at scale</li>
<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>
<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>
<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>
</ul>
<p>Compensation:</p>
<p>US Zone 2 $198,900-$269,100 USD</p>
<p>US Zone 3 $176,800-$239,200 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,900-$269,100 USD</Salaryrange>
      <Skills>SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file sharing services.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7595183?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aeba45bc-3e4</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>
<p>Visit mixpanel.com to learn more.</p>
<p>About the Customer Success &amp; Solutions Engineering Team</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customer’s business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customer’s organizations, help the customer manage change, execute on technical projects and services that delight our customers and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role</p>
<p>Our SEs are inquisitive, nimble, and able to clearly articulate the technical benefits and requirements of Mixpanel to developers and product managers, while also communicating the business value of our product to high-level executives. In your first month, you&#39;ll become a Mixpanel expert, both in features and functionality and in implementation. You&#39;ll have the opportunity to shadow customer calls and demos with current Sales Engineers and Account Executives while learning to articulate our value proposition. You&#39;ll also be trained on Mixpanel&#39;s internal systems and tools to set you up for success.</p>
<p>Within your first three months, you&#39;ll be directly involved in deal cycles with Commercial Account Executives. You&#39;ll lead the technical qualification for customer use cases and deliver customized demos for prospects. You&#39;ll work directly with leadership at the prospect&#39;s organization to understand business challenges that can be solved through an analytics platform and consult on how Mixpanel can address those challenges to achieve a strong ROI. You&#39;ll also work with the prospect&#39;s business and technical teams to scope and execute proof-of-concept projects to establish Mixpanel&#39;s value, including consulting on data ingestion methods, overall architecture, success criteria, and rollout strategies for analytics tools across an organization.</p>
<p>Responsibilities</p>
<p>Serve as a trusted technical advisor for prospects, providing strategic consultation on data architecture, governance, instrumentation, and business outcomes.</p>
<p>Communicate and consult effectively at all levels of the customer’s organization to earn trust and influence buying decisions.</p>
<p>Bridge the technical-business gap, working with senior stakeholders to define success for proofs of concept and ensuring successful execution and outcomes.</p>
<p>Leverage your Mixpanel expertise and technical/consultative skills to impart best practices throughout proof-of-concept projects.</p>
<p>Partner with Account Executives to drive revenue growth, serving as the key technical contact for customers.</p>
<p>Partner with post-sales teams to ensure that pre-sales value propositions translate into tangible post-sales results.</p>
<p>Develop relationships and uncover the needs of key technical stakeholders within your assigned book of business.</p>
<p>Be the “Voice of the Prospect” by collecting feedback from potential Mixpanel customers and sharing it with the Product team.</p>
<p>We&#39;re Looking For Someone Who Has</p>
<p>The ability to communicate with stakeholders at all levels, from discussing APIs with developers to discussing organizational efficiency with CIOs.</p>
<p>A demonstrated track record of qualifying and selling technical solutions to executive stakeholders.</p>
<p>6+ years of experience in a Software-as-a-Service Sales Engineering or related role.</p>
<p>Experience in data querying, modeling, and transformation using tools such as SQL, dbt, Python, Business Intelligence platforms, or Product Analytics tools.</p>
<p>Familiarity with databases and cloud data warehouses (e.g., Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks).</p>
<p>A successful record of experience in sales engineering, customer success, client-facing professional services, consulting, or technical project management.</p>
<p>Excellent written, analytical, communication, and presentation skills.</p>
<p>Strong process and project delivery discipline.</p>
<p>The ability to travel.</p>
<p>Fluency in multiple languages; German preferred.</p>
<p>Benefits and Perks</p>
<p>Comprehensive Medical, Vision, and Dental Care</p>
<p>Mental Wellness Benefit</p>
<p>Generous Vacation Policy &amp; Additional Company Holidays</p>
<p>Enhanced Parental Leave</p>
<p>Volunteer Time Off</p>
<p>Additional US Benefits: Pre-Tax Benefits including 401(k), Wellness Benefit, Holiday Break</p>
<p>Culture Values</p>
<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>
<p>Innovate with Insight: We tackle decisions with rigor and judgment, combining data, experience, and collective wisdom to drive powerful outcomes.</p>
<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>
<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>
<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>
<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, dbt, Python, Business Intelligence platforms, Product Analytics tools, Databases, Cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
<Employerdescription>Mixpanel is a software company that provides a digital analytics platform. It has over 9,000 customers and has raised $277M from prominent investors.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7407407?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, UK (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ae9db20-be4</externalid>
      <Title>Technical Instructor</Title>
<Description><![CDATA[<p>We&#39;re seeking a Technical Instructor with a passion for teaching and working with data to join our training team, developing curriculum and delivering instruction focused on dbt. As a Technical Instructor, you will deliver live, world-class instruction to train and onboard dbt Cloud customers, partners, and GSIs in small groups, large groups, and webinar audiences. You will create an engaging learning environment, initially in a remote context and likely in person in the future, and get learners excited about using dbt Cloud to make an impact at their organization. You will clearly teach and demo new concepts and skills, facilitate live co-development sessions where learners apply what they have learned, and adjust instruction on the fly while focusing on learner outcomes. You will provide critical feedback from your classroom experience to inform curriculum improvements, become a product expert in dbt in the context of the modern data stack, and build curriculum independently while gathering and acting on feedback and reviewing your own teaching.</p>
<p>To be successful in this role, you will need a Bachelor&#39;s degree in a related field such as Computer Science, Data Analytics, or Education, plus 2-4 years of technical instruction or related experience. You will love teaching and creating those lightbulb moments for learners, and you will build learning environments with high levels of engagement. You will be laser-focused on learner and customer outcomes while adjusting instruction on the fly. You will believe teaching is a craft that can always be improved and will actively seek out feedback. You will communicate clearly and concisely with internal and external stakeholders, thrive in an environment of cross-collaboration that moves quickly, and have experience developing curricula and shipping courses fast.</p>
<p>What will make you stand out? You have worked on customer education/training teams and know how training can drive outcomes for customers. You have experience using and/or teaching dbt. You have experience writing analytics code (e.g., Python, R) in addition to SQL, and working with databases. You have experience designing curricula with a focus on backwards design. You have a dbt Fundamentals badge.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$73,000 - $88,200 USD</Salaryrange>
      <Skills>dbt, data analytics, education, curriculum development, instructional design, teaching, learning and development, product expertise, customer education, training, analytics engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4667068005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US East - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f2f48ad-46d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>
<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and eagerness to learn, and exhibit great judgment and sharp business and product instincts that allow you to differentiate essential from nice-to-have and make good choices about trade-offs. You should also have excellent communication skills, tailoring explanations of technical concepts to a variety of audiences.</p>
<p>Nice to have: exposure to Apache Airflow or other DAG frameworks; experience with Tableau, Looker, or a similar visualization/business intelligence platform; experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe; and familiarity with Python.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7b478be3-c4b</externalid>
      <Title>Majors Sales Director (Boston)</Title>
<Description><![CDATA[<p>We&#39;re looking for a Majors Sales Director to join our Revenue Team. As a key member of our growing Sales team, you will be responsible for building out our strategic customer base throughout the Northeast region. Your role will involve owning the full sales cycle from lead to ongoing utilization for enterprise prospects, organizing POC implementations of dbt Cloud Enterprise, and winning new dbt Cloud Enterprise customers each year.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the full sales cycle from lead to ongoing utilization for enterprise prospects</li>
<li>Organizing POC implementations of dbt Cloud Enterprise</li>
<li>Winning 20 new dbt Cloud Enterprise customers per year (after ramp)</li>
<li>Leading and contributing to team projects that develop our sales process</li>
<li>Working with product to build and maintain the dbt Cloud enterprise roadmap</li>
<li>Becoming an expert in SQL, dbt, and enterprise data operations</li>
<li>Being an active member of the dbt open source community</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>7+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>Prior experience in analytics, ETL, BI, and/or open-sourced software is a plus, as well as knowledge of or prior experience with dbt.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$300,000-$380,000 USD</Salaryrange>
      <Skills>SQL, dbt, enterprise data operations, sales, consultative sales, analytics, ETL, BI, open-sourced software</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform that helps data teams transform raw data into reliable, actionable insights, serving over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4632327005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US East - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a17bc01-d7d</externalid>
      <Title>Staff Software Engineer</Title>
<Description><![CDATA[<p>dbt Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. This platform stores not just metadata, but meaning: decisions, intent, rationale, and history, and makes it safely accessible to humans, agents, and applications.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Prototyping apt technical solutions and finding best fits for the context engine.</li>
<li>Architecting and building the core Context Platform.</li>
<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>
<li>Owning context storage systems (graph, vector, event/time-based).</li>
<li>Building read/write/query APIs used by agents, products, and external apps.</li>
<li>Designing permission-aware, auditable context access.</li>
</ul>
<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>
<p>In this role, you will own:</p>
<ul>
<li>Context schemas and schema evolution strategies.</li>
<li>Storage and data modeling choices.</li>
<li>Platform APIs and interfaces.</li>
<li>Security, identity propagation, and audit foundations.</li>
<li>Long-term scalability and correctness of context data.</li>
</ul>
<p>You will not own:</p>
<ul>
<li>Agent behavior or orchestration logic.</li>
<li>Business rules or governance policy decisions.</li>
<li>Product UI or workflow automation.</li>
</ul>
<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>
<p>Nice to have: experience with knowledge graphs, metadata systems, or search/retrieval systems; experience building systems with governance, auditability, or compliance requirements; and familiarity with dbt, modern analytics stacks, or developer tooling.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9be280f4-cbc</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>
<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>
<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>
<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58a44dab-91a</externalid>
      <Title>Partner Solutions Architect - Japan</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>
<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>
<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>
<p>In practice, this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>
<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>
</ul>
<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and influenced pipeline.</p>
<p>What you&#39;ll need:</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Demo Round</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Japan - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce7ea64c-436</externalid>
      <Title>Enterprise Sales Director (North Central)</Title>
      <Description><![CDATA[<p>We&#39;re looking for an Enterprise Sales Director to join our Revenue Team. As a key member of our growing Sales team, you will be responsible for building out our enterprise customer base within the North Central region of the US. With a proven track record of exceeding annual targets, you will identify new business with prospects and growth opportunities for clients. Your ability to understand complex technical concepts and develop them into a consultative sale will be essential in engaging stakeholders at all levels of an analytics organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the full sales cycle from lead to ongoing utilization for enterprise prospects</li>
<li>Organize POC implementations of dbt Cloud Enterprise</li>
<li>Win 20 new dbt Cloud Enterprise customers per year (after ramp)</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
<li>Become an expert in SQL, dbt, and enterprise data operations</li>
<li>Be an active member of the dbt open source community</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-sourced software</li>
<li>Knowledge of or prior experience with dbt</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation:</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Lab’s total rewards during your interview process. In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<p>Sales Director OTE Range $238,000-$320,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$238,000-$320,000 USD</Salaryrange>
      <Skills>technology sales, SQL, dbt, enterprise data operations, complex technical concepts, consultative sale, analytics, ETL, BI, open-sourced software</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4651705005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US Central - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90cf972f-2cf</externalid>
      <Title>Senior Data Analyst – Insights &amp; Analytics (Revenue Operations)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Analyst to join the Insights &amp; Analytics team at Elastic. You&#39;ll help shape how our global Revenue teams use data to make smart decisions, plan for growth, and stay focused on what matters.</p>
<p>This role is a mix of strategy, hands-on analysis, and cross-team collaboration. You&#39;ll work closely with Sales, Customer Success, Marketing, Finance, and more, bringing data to life and helping teams see the story behind the numbers.</p>
<p>We work across a wide range of tools and datasets, from dashboards and forecasts to detailed analytical deep dives, helping the business stay focused, aligned, and data-informed.</p>
<p>To support our growth and enable us to scale efficiently, we are seeking an exceptional Senior Data Analyst to drive sales strategy, planning, reporting, and analysis efforts.</p>
<p>In this position, you will play a strategic role in driving data-informed decision-making across Elastic’s Global Revenue Operations organization and broader go-to-market ecosystem.</p>
<p>You will work on high-impact analysis and develop scalable, leadership-level reporting to support sales effectiveness, pipeline optimization, and revenue growth.</p>
<p>You’ll use your strong analytical skills to break down complex business problems and help teams make smarter decisions.</p>
<p>Your insights will shape how we plan, operate, and improve over time.</p>
<p><strong>What You’ll Be Doing</strong></p>
<ul>
<li>Build clean, scalable dashboards and tools using SQL (BigQuery), dbt, and Tableau</li>
<li>Analyze complex data to answer key business questions, and turn insights into action</li>
<li>Handle ad hoc asks in Google Sheets, while staying focused on big-picture, long-term impact</li>
<li>Support senior stakeholders with clear, accurate reporting for exec and board-level needs</li>
<li>Question assumptions and get to the root of the problem, not just the request</li>
<li>Validate your work thoroughly and explore data anomalies with curiosity</li>
</ul>
<p><strong>Working Independently, While Staying Connected</strong></p>
<ul>
<li>Take ownership of projects from start to finish, managing your own scope, priorities, and timelines</li>
<li>Collaborate across time zones and teams (Sales, Field Ops, Data Engineering, and more) to ensure alignment and data consistency across data sources and reporting</li>
<li>Spot data issues early and partner with the right folks to fix them at the source</li>
<li>Help keep our reporting consistent and aligned across tools and teams</li>
</ul>
<p><strong>Learning, Growing, and Making an Impact</strong></p>
<ul>
<li>Build real-world experience in Revenue Operations while learning how the business runs</li>
<li>Lead high-impact projects that shape go-to-market strategy</li>
<li>Grow your skills in areas like predictive analytics, data architecture, and business planning</li>
<li>Work directly with senior stakeholders and build strong relationships across the company</li>
<li>Be part of a team where your ideas and work make a visible difference</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>4+ years of experience in data analytics, BI, or a similar role, ideally in a high-impact, fast-paced environment</li>
<li>Strong SQL skills (BigQuery preferred); experience with dbt is a plus</li>
<li>Proficient with data visualization tools like Tableau or Power BI; experience with predictive analytics is a plus</li>
<li>Experience working with Salesforce or similar sales data tools</li>
<li>Comfortable working in Google Sheets to support quick turnaround requests</li>
<li>Familiarity with B2B SaaS and a solid understanding of sales or post-sales data</li>
<li>Experienced in managing complex projects with clarity and focus: you know how to prioritize, follow through, and get unblocked when needed</li>
<li>Clear, proactive communicator who can explain complex ideas simply and help others make informed decisions</li>
</ul>
<p>You’ll join a remote-friendly team that values curiosity, clarity, and action to deliver impact to the business. You’ll have room to grow, freedom to explore, and the support you need to do your best work, while learning how data helps shape every part of our business.</p>
<p><strong>Additional Information</strong></p>
<p><strong>We Take Care of Our People</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life.</p>
<p>Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
</ul>
<p><strong>Increase your impact</strong></p>
<ul>
<li>We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
<p>Different people approach problems differently. We need that.</p>
<p>Elastic is an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds.</p>
<p>Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation.</p>
<p>We welcome individuals with disabilities and strive to create an accessible and inclusive experience for all individuals.</p>
<p>To request an accommodation during the application or the recruiting process, please email candidate_accessibility@elastic.co.</p>
<p>We will reply to your request within 24 business hours of submission.</p>
<p>Applicants have rights under Federal Employment Laws, view posters linked below:</p>
<p>Family and Medical Leave Act (FMLA) Poster;</p>
<p>Pay Transparency Nondiscrimination Provision Poster;</p>
<p>Employee Polygraph Protection Act (EPPA) Poster and Know Your Rights (Poster)</p>
<p>Elastic develops and distributes technology and information that is subject to U.S. and other countries’ export controls and licensing requirements for individuals who are located in or are nationals of the following sanctioned countries and regions: Belarus, Cuba, Iran, North Korea, Syria, or Russia, including the Ukrainian territories annexed by Russia (The Crimea region of Ukraine, The Donetsk People&#39;s Republic (DNR), The Luhansk People&#39;s Republic (LNR), Kherson or Zaporizhzhia).</p>
<p>If you are located in or are a national of one of the listed countries or regions, an export license may be required as a condition of your employment in this role.</p>
<p>Please note that national origin and/or nationality do not affect eligibility for employment with Elastic.</p>
<p>Please see here for our Privacy Statement.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, BigQuery, dbt, Tableau, data visualization, predictive analytics, data architecture, business planning, Salesforce, Google Sheets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7601880?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e4e06a1c-882</externalid>
      <Title>Senior Data Engineer, Air Dominance &amp; Strike</Title>
<Description><![CDATA[<p>We&#39;re looking for ambitious, strategic engineers to build and accelerate every step of the way. As a Senior Data Engineer, you will be responsible for setting project impact, prioritization, and timelines. You will build and grow a team of engineers and lead by example as we grow and scale our business and field operations. You will partner with senior leaders to bring structure, insight, and velocity to Anduril&#39;s highest-priority operational challenges.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing operational apps in a scrappy but effective way, working closely with users to iterate and support outcomes</li>
<li>Building systems and infrastructure that allows applications to mature, roll out, and scale in the field</li>
<li>Working closely with Anduril&#39;s corporate Analytics team to expand our ontologies and generalize our data and workflow applications across the entire company</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years experience in technical operating roles</li>
<li>Proof of delivery: you owned a project end-to-end, from ambiguity to a delivered, scalable outcome</li>
<li>Leader: you&#39;re skilled at leading through influence and diving into the details yourself, and you have experience building high-performing teams</li>
<li>Strong communicator: demonstrated aptitude for strategic partnerships and communication with technical and non-technical stakeholders</li>
<li>Engineering fundamentals: a Computer Science, Engineering, Physics, or similar background</li>
<li>You&#39;re deeply intellectually interested in the intersection of analytics and the real, physical, atoms-based, hardware world and are motivated by Anduril&#39;s mission</li>
<li>You&#39;re energized by business impact and are a self-starter: you&#39;d rather build an imperfect solution quickly that is used by many people than a perfect solution that collects dust</li>
<li>Coding experience: Python, SQL, React, TypeScript, or equivalent</li>
<li>Data system exposure: dbt, Redshift, Looker/Tableau, Palantir Foundry</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>Python, SQL, React, TypeScript, dbt, Redshift, Looker/Tableau, Palantir Foundry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that develops advanced technology to transform U.S. and allied military capabilities.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5111298007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09a4d1ce-cde</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced Data Engineer to partner with our Data Science and Data Infrastructure teams to own and scale our data pipelines. You&#39;ll also work closely with stakeholders across business teams including sales, marketing, and finance to ensure that the data they need arrives promptly and reliably.</p>
<p>As a Data Engineer at Figma, you will be responsible for building and maintaining scalable data pipelines that connect various cloud data sources. You will develop a deep understanding of Figma&#39;s core data models and optimize data pipelines for scale. You will partner with the Data Science and Data Infrastructure teams to build new foundational data sets that are trusted, well understood, and enable self-service.</p>
<p>You will work with a wide range of cross-functional stakeholders to derive requirements and architect shared datasets, documenting, simplifying, and explaining complex problems to different types of audiences. You will establish best practices for the development of specialized data sets for analytics and modeling.</p>
<p>We&#39;d love to hear from you if you have:</p>
<ul>
<li>4+ years in a relevant field.</li>
<li>Fluency with both SQL and Python.</li>
<li>Familiarity with Snowflake, dbt, Dagster, and ETL/reverse ETL tools.</li>
<li>Excellent judgment and creative problem-solving skills.</li>
<li>A self-starting mindset along with strong communication and collaboration skills.</li>
</ul>
<p>While not required, it&#39;s an added plus if you also have:</p>
<ul>
<li>Knowledge in data modeling methodologies to design and build robust data architectures for insightful analytics.</li>
<li>Experience with business systems such as Salesforce, Customer IO, Stripe, NetSuite is a big plus.</li>
</ul>
<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyways. You may be just the right candidate for this or other roles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$140,000-$348,000 USD</Salaryrange>
      <Skills>SQL, Python, Snowflake, dbt, Dagster, ETL/reverse ETL tools, data modeling methodologies, business systems such as Salesforce, Customer IO, Stripe, NetSuite</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design and collaboration platform that helps teams bring ideas to life. It was founded in 2012 and has grown to become a leading player in the design and collaboration space.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5220003004?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>328a534b-bac</externalid>
      <Title>Customer Sales Director (Austin, TX)</Title>
      <Description><![CDATA[<p>We are looking for a Customer Sales Director to focus on an at-scale strategy to support, retain, and grow a mix of our Commercial and Enterprise customer base. This role is a hybrid-based role in Austin, Texas.</p>
<p>The ideal candidate will have 4+ years of experience in SaaS sales or account management, with a proven track record of exceeding targets. They will be able to build a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts and manage multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts</li>
<li>Managing multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas</li>
<li>Protecting renewals by monitoring account signals, deepening executive alignment, and helping customers realize consistent value</li>
</ul>
<p>The successful candidate will have strong consultative selling skills, engaging effectively with both technical and business audiences. They will be proactive and organized, capable of independently managing a diverse book of business.</p>
<p>Preferred qualifications include prior experience in analytics, ETL, BI, or open-source software; familiarity with dbt (Core or Cloud) and the modern data stack, including platforms like Snowflake, BigQuery, Redshift, or Databricks; experience with consumption- and/or usage-based pricing structures; and experience with the MEDD(P)ICC sales methodology / Command of the Message.</p>
<p>Benefits include unlimited vacation time, a 401k plan with 3% guaranteed company contribution, comprehensive healthcare coverage, generous paid parental leave, and flexible stipends for health &amp; wellness, home office setup, cell phone &amp; internet, learning &amp; development, and office space.</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>SaaS sales, account management, analytics, ETL, BI, open-source software, dbt (Core or Cloud), Snowflake, BigQuery, Redshift, Databricks, consumption and/or usage-based pricing structures, MEDD(P)ICC sales methodology / Command of the Message</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer in analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4616931005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>015afe59-9fd</externalid>
      <Title>Data Analyst II</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>
<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>
<p>Our Data Scientists, Analysts, and Engineers work together to make data, and the insights derived from it, a core asset across the company.</p>
<p>What you’ll do</p>
<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>
<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>
<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>
<p>Where you’ll work</p>
<p>This role will be based in our New York office.</p>
<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>
<p>We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>
<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>
<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>
<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>
<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>
<p>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>
<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>
<p>Requirements</p>
<p>3+ years of experience in data analytics or a related role in a professional setting.</p>
<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>
<p>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</p>
<p>Experience with Python for data analysis, automation, or scripting.</p>
<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>
<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>
<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>
<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</p>
<p>Bonus points</p>
<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>
<p>Familiarity with dbt for data modeling and transformation.</p>
<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>
<p>Experience in fintech, financial services, or payments.</p>
<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>
<p>Compensation</p>
<p>The expected salary range for this role is $93,600 - $117,000.</p>
<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$93,600 - $117,000</Salaryrange>
      <Skills>SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463702002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b492f9ba-bb6</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We&#39;re looking for an Enterprise Account Executive to grow and manage our enterprise customer base in DACH. As a proactive and curious member of our growing Sales team, you will identify new business with prospects and growth opportunities for clients.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Build, manage and close your own pipeline of companies that you believe will benefit from the dbt Cloud offering</li>
<li>Manage, and deepen the dbt Cloud footprint in existing accounts, optimizing our impact on these companies</li>
<li>Engage with technology partners and ecosystem service providers to optimize our impact and reach in the region</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
</ul>
<p>We&#39;re looking for someone who has:</p>
<ul>
<li>Demonstrable ability to build and close your own pipeline within enterprise accounts</li>
<li>4+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>You have an edge if you have:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-source software</li>
<li>Knowledge of or prior experience with dbt</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs&#39; total rewards during your interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, technology sales, consultative sale, communication skills, sales cycles, analytics organization, analytics, ETL, BI, open-source software, dbt</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4668374005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Germany - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e40d534f-76a</externalid>
      <Title>Resident Architect</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>We&#39;re seeking an experienced Resident Architect (RA) with a passion for solving challenging problems with dbt to join our Professional Services team. RAs are billable to dbt Enterprise customers and help achieve our mission to empower data developers to create and disseminate organisational knowledge.</p>
<p>Responsibilities</p>
<ul>
<li>Work on a variety of impactful customer technical projects - inclusive of implementation, troubleshooting configurations, instilling best practices, and solutioning MVPs and long-term solutions to customer-specific requirements</li>
<li>Consult on architecture and design</li>
<li>Ensure our most strategic enterprise customers are adopting the product</li>
<li>Collaborate with other internal customer-facing teams at dbt Labs - Sales, Solution Architects, Training, Support</li>
<li>Provide critical feedback to dbt Labs product and engineering teams to improve and prioritise customer requests and ensure rapid resolution for engagement-specific issues</li>
<li>Become a product expert with dbt in the context of the modern data stack (if you aren&#39;t already)</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>4+ years&#39; experience working with technical data tooling, even better if it is in a customer-facing post-sales, technical architect, or consulting role</li>
<li>Deep expertise in at least one data platform (Snowflake, Databricks, BigQuery, Redshift)</li>
<li>Experience using, deploying, or configuring dbt in an enterprise setting - a minimum of 1 year working with dbt</li>
<li>Proficiency in writing SQL and Python in analytics contexts</li>
<li>You look forward to building skills in technical areas that support deployment and integration of dbt enterprise solutions to complete customer projects</li>
<li>Customer focus, embracing one of our core values: users are our best advocates</li>
<li>Strong organisational skills with the ability to manage multiple technical projects simultaneously - including defining scope, tracking timelines, and ensuring deliverables are met</li>
<li>Clear and concise communicator with the ability to engage internal and external stakeholders, effectively explain complex technical or organisational challenges, and propose thoughtful, iterative solutions</li>
<li>The ability to thrive in a remote organisation that highly values transparency and cross-collaboration</li>
<li>Willingness to travel approximately 2-4x/year for customer onsite sessions, team offsites, and company events</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>You have obtained the dbt Analytics Engineering Certification</li>
<li>You have the ability to advise on dbt enterprise recommendations and build direction/consensus with the customer to move forward</li>
<li>Experience with traditional enterprise ETL tooling (Informatica, DataStage, Talend)</li>
</ul>
<p>Remote Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner</li>
<li>Hiring Manager Interview</li>
<li>Technical Task + Presentation</li>
<li>Team Interview</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions about dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is $114,000 - $137,700</li>
<li>The typical starting salary range for this role in the select locations listed is $126,000 - $153,000</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,000 - $137,700</Salaryrange>
      <Skills>dbt, data platform, Snowflake, Databricks, BigQuery, Redshift, SQL, Python, analytics engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4627942005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3168d7d3-70b</externalid>
      <Title>Partner Solutions Architect - North America</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>
<p>Responsibilities</p>
<ul>
<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>Pension coverage</li>
<li>Excellent healthcare</li>
<li>Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a software company that provides an analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they have surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Canada - Remote; US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4e6e79bb-e0c</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Scientist to play a key role in Medium&#39;s data science practice, delivering rigorous analysis and predictive modeling that inform product and business decisions.</p>
<p>As a member of Medium’s Machine Learning &amp; Insights team, you’ll partner closely with stakeholders across teams to help deepen our collective understanding of Medium’s members, writers, and business through data.</p>
<p>You&#39;ll work alongside our Principal Scientist, contributing methodological rigor to strategic initiatives while owning end-to-end research and model development in your domain.</p>
<p>This is a unique role for someone with a track record of solving big, ambiguous problems at the intersection of data, product, and business strategy.</p>
<p>You’ll do more than ivory-tower modeling; you’ll help us define what “content quality” looks like, design better experiments, and ship real product changes to users.</p>
<p>If you love both statistical rigor and real-world business impact, this might be the role for you!</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Proactively identify valuable areas of investigation that help deepen our understanding of our members, writers, and overall business.</li>
<li>Partner with diverse technical and non-technical stakeholders across the company to develop hypotheses and generate actionable insights in their domains.</li>
<li>Work with executives at Medium, including our CEO, to model and present data insights and findings.</li>
<li>Build and maintain statistical and predictive models about Medium’s business.</li>
<li>Run research projects and investigations, small and large, for leadership and cross-functional partners.</li>
<li>Develop and maintain quantitative models that support forecasting and strategic planning.</li>
<li>Share knowledge with and mentor engineers and other stakeholders to improve their own analytics capabilities.</li>
<li>Contribute to the broader data culture and ecosystem at Medium, helping to raise our data fluency as a team.</li>
<li>Attend Medium’s twice-yearly, in-person offsites (hosted in locations around the U.S.).</li>
</ul>
<p><strong>Skills, Knowledge and Expertise</strong></p>
<ul>
<li>You know your way around data. You have 4-6 years of experience as an in-house data scientist, with a proven track record of driving business impact through data.</li>
<li>You&#39;re highly proficient in statistical programming with either Python or R, and you’re comfortable writing SQL for analytical queries. (Python skills are strongly preferred. Our team uses Python extensively, and we’ll be expecting candidates to demonstrate Python scripting skills during the interview process.)</li>
<li>You have a track record of building, validating, and deploying predictive and statistical models that drove measurable business outcomes.</li>
<li>You&#39;re a strong collaborator with an established history of cross-team and executive-level partnership.</li>
<li>You care about quality writing, informed readership, and building a sustainable model for creators. Experience applying modeling techniques to problems unique to social platforms, subscription/membership businesses, or publishing is a plus.</li>
<li>Experience with ML engineering practices, dbt, or data engineering is a plus! We&#39;re a small team, and the folks who do best are those who like to wear many hats.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>In addition to the new skills you&#39;ll pick up, here&#39;s what else you&#39;ll enjoy by working at Medium:</p>
<ul>
<li>Working with a fully distributed team: We’re fully remote and have teammates across the U.S. &amp; France.</li>
<li>Healthcare benefits covered at 100% for employees and 70% for dependents.</li>
<li>Generous parental leave policy.</li>
<li>Mental health support through Talkspace.</li>
<li>Financial wellness support through Northstar.</li>
<li>Stipends for co-working, professional development, wifi, and a one-time home office bonus.</li>
<li>Unlimited PTO and standard company holidays.</li>
<li>A discounted Medium membership!</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data science, statistical programming, Python, R, SQL, predictive modeling, statistical modeling, machine learning, data engineering, ML engineering practices, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Medium</Employername>
      <Employerlogo>https://logos.yubhub.co/medium.com.png</Employerlogo>
      <Employerdescription>Medium is a platform for reading and writing on the internet.</Employerdescription>
      <Employerwebsite>https://medium.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/medium/jobs/4192878009?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a27a43f9-673</externalid>
      <Title>Senior Data Analyst, Enterprise Analytics</Title>
      <Description><![CDATA[<p>As a Senior Data Analyst on GitLab&#39;s Enterprise Analytics team, you&#39;ll support some of GitLab&#39;s most visible go-to-market and executive reporting. You&#39;ll work closely with Sales, Marketing, Revenue Operations, Finance, and analytics partners to deliver company-level reports, go-to-market performance views, and lifecycle reporting that leaders use to run the business.</p>
<p>Working in Snowflake, dbt, Tableau, and VS Code, you&#39;ll turn ambiguous business questions into trusted, well-documented data products that serve as a single source of truth for performance and targets versus actuals. You&#39;ll also help improve our Enterprise Analytics handbook and core data foundations so strategy, processes, and metric definitions are clear and usable in our all-remote, values-driven environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain executive-facing scorecards, go-to-market performance views, and new-customer reporting that connect pipeline, bookings, and product usage signals into targets-versus-actuals tracking by motion.</li>
<li>Design performant, reusable Tableau Cloud data sources and help shape the underlying dbt models so reporting layers are stable, governed, and aligned to single-source-of-truth patterns.</li>
<li>Collaborate with Analytics Engineering and Data Engineering to improve dbt models that support reliable, scalable reporting for business stakeholders.</li>
<li>Document metric logic, data lineage, and Tableau usage patterns in the handbook so stakeholders can understand how data products are built and used.</li>
<li>Implement and monitor data quality checks and reconciliations across Snowflake, Salesforce, and other go-to-market systems to strengthen trust in company-level reporting.</li>
<li>Partner with Revenue Operations, Finance, and go-to-market analysts and stakeholders to define questions, align on metric definitions and pacing logic, and deliver dashboards and deep-dive analyses.</li>
<li>Present insights in a clear, actionable way for both operational and executive audiences so teams can make better decisions faster.</li>
<li>Share best practices in SQL, data visualization, and go-to-market analytics, including code reviews and pairing support on high-priority dashboards.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong analytical skills with the ability to frame ambiguous business questions, structure analysis, and translate findings into clear recommendations for go-to-market and executive stakeholders.</li>
<li>Advanced SQL skills working with large, complex data models, preferably in Snowflake, including joining many tables and building reusable queries and views.</li>
<li>Proven experience building executive-facing dashboards in Tableau or a similar business intelligence tool, including data source design, performance tuning, and visualization best practices.</li>
<li>Deep understanding of go-to-market concepts such as new-customer and first-order metrics, pipeline, bookings, Net ARR, go-to-market motions, sales segments, and marketing funnel metrics.</li>
<li>Strong command of GenAI tools for daily use in your work and the judgment to use them effectively to improve speed and quality.</li>
<li>Ability to communicate complex analyses in a clear, concise way through presentations, written narratives, and data visualizations for both technical and non-technical audiences.</li>
<li>Comfort working in an all-remote, asynchronous environment, with a high level of ownership and the ability to drive work forward independently.</li>
<li>Alignment with GitLab&#39;s values and openness to applying transferable skills from related analytics, revenue operations, or business intelligence roles.</li>
<li>Experience with dbt and modern analytics engineering patterns, including trusted marts, certified sources, and documented lineage, is helpful but not required.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Tableau, dbt, Snowflake, GenAI, data visualization, business intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8478359002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote, Bangalore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19d143c9-cac</externalid>
      <Title>Data Analytics Engineer</Title>
      <Description><![CDATA[<p>Our mission is to bring web3 to a billion people by providing builders with the tools they need to build exceptional onchain products. As a Data Analytics Engineer, you will be the data layer for the entire company, designing clean, trusted datasets that power our AI tooling and ensuring every team can make decisions from a single source of truth.</p>
<p>You will build and own the canonical data models in Snowflake that serve as Alchemy&#39;s company-wide source of truth, structure datasets so vendor AI tools perform optimally out of the box, and explore and prototype MCP integrations that let internal teams query data conversationally.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and owning the canonical data models in Snowflake that serve as Alchemy&#39;s company-wide source of truth</li>
<li>Structuring datasets so vendor AI tools perform optimally out of the box</li>
<li>Exploring and prototyping MCP integrations that let internal teams query data conversationally</li>
<li>Eliminating shadow tables and one-off datasets by proactively serving team data needs at the platform level</li>
</ul>
<p>Requirements include 6+ years in data engineering with strong SQL and deep Snowflake expertise, experience designing efficient, scalable analytical data models, and proficiency with dbt or comparable transformation frameworks.</p>
<p>Benefits include medical, dental, and vision coverage, gym reimbursement, home office build-out budget, in-office group meals, commuter benefits, flexible time off, wellbeing and mental health perks, learning and development stipend, company-sponsored conferences and events, HSA and FSA plans, and fertility benefits.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200,000 - $240,000 annually</Salaryrange>
      <Skills>data engineering, SQL, Snowflake, dbt, MCP, AI tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Alchemy</Employername>
      <Employerlogo>https://logos.yubhub.co/alchemy.com.png</Employerlogo>
      <Employerdescription>Alchemy provides tools for building onchain products and powers 70% of top web3 teams.</Employerdescription>
      <Employerwebsite>https://www.alchemy.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/alchemy/jobs/4677021005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, New York, United States, San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a29684d-d2d</externalid>
      <Title>Senior Analytics Developer - Platform Analytics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Analytics Engineer to join our Platform Analytics team. In this role, you&#39;ll design and evolve core analytical data models that power trusted, self-service analytics across Elastic. You&#39;ll shape the underlying structure of our analytics layer,aligning definitions, improving usability, and enabling faster, more reliable insights for teams across the company.</p>
<p>This role goes beyond delivering within existing patterns. You&#39;ll improve foundational modeling decisions, reducing rework, and establishing standards that scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build core analytical data models in BigQuery using dbt</li>
<li>Refactor and restructure existing models to improve clarity, consistency, and ease of use</li>
<li>Partner directly with solution teams to translate business needs into well-defined, reusable data models</li>
<li>Define and enforce modeling standards, conventions, and layer contracts</li>
<li>Standardize identifiers and business logic early in the transformation layer to reduce downstream complexity</li>
<li>Centralize shared business rules and definitions to enable consistent, trusted analytics</li>
<li>Explore and apply AI-assisted approaches to improve analytics workflows</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong expertise in Python, SQL, and analytics data modeling</li>
<li>5+ years of experience in analytics engineering, data engineering, or a related role</li>
<li>Hands-on experience designing analytics layers in BigQuery and dbt</li>
<li>Proven ability to create analyst-friendly data models with clear structure and predictable behavior</li>
<li>Experience setting standards and influencing how data is modeled and consumed across teams</li>
<li>Strong analytical thinking and problem-solving skills</li>
<li>Clear written and verbal communication skills</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience working in a distributed or remote-first environment</li>
<li>Familiarity with metric definitions or semantic layers</li>
<li>Experience applying AI or automation to analytics or data modeling workflows</li>
</ul>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $128,300-$203,000 CAD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$128,300-$203,000 CAD</Salaryrange>
      <Skills>Python, SQL, analytics data modeling, BigQuery, dbt, AI-assisted approaches, metric definitions, semantic layers, AI or automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7614524?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f06742a2-a51</externalid>
      <Title>Senior Software Engineer (Data Platform)</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. Our engineering teams build technical products that fulfill real, important needs in the world. We develop and operate one of the largest-scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>
<p>As a Senior Software Engineer working on the Data Platform team, you will help build the Data Intelligence Platform for Databricks that will allow us to automate decision-making across the entire company. You will achieve this in collaboration with Databricks Product Teams, Data Science, Applied AI, and many more. You will develop a variety of tools spanning logging, orchestration, data transformation, metric stores, governance platforms, data consumption layers, and more. You will do this using the latest, bleeding-edge Databricks product and other tools in the data ecosystem - the team also functions as a large, production, in-house customer that dogfoods Databricks and guides the future direction of the product.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and run the Databricks metrics store that enables all business units and engineering teams to bring their detailed metrics into a common platform for sharing and aggregation, with high quality, introspection ability, and query performance.</li>
<li>Design and run the cross-company Data Intelligence Platform, which contains every business and product metric used to run Databricks. You’ll play a key role in striking the right balance between data protections and ease of sharing as we transition to a public company.</li>
<li>Develop tooling and infrastructure to efficiently manage and run Databricks on Databricks at scale, across multiple clouds, geographies, and deployment types. This includes CI/CD processes, test frameworks for pipelines and data quality, and infrastructure-as-code tooling.</li>
<li>Design the base ETL framework used by all pipelines developed at the company (a minimal sketch of this kind of pipeline step follows this list).</li>
<li>Partner with our engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product.</li>
<li>Build reliable data pipelines and solve data problems using Databricks, our partners’ products, and other OSS tools. Provide early feedback on the design and operations of these products.</li>
<li>Establish conventions and create new APIs for telemetry, debug, feature, and audit event log data, and evolve them as the product and underlying services change.</li>
<li>Represent Databricks at academic and industrial conferences and events.</li>
</ul>
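<p>As a flavor of that kind of pipeline work, below is a minimal, hypothetical PySpark sketch of one ETL step that rolls raw event logs up into a daily metric table. The paths, column names, and metric are illustrative assumptions, not the team&#39;s actual framework.</p>
<pre><code># Hypothetical ETL step: roll raw event logs up into a daily metric table.
# Paths and column names are illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_metrics").getOrCreate()

# Extract: read newline-delimited JSON event logs.
events = spark.read.json("/data/raw/events/")

# Transform: count events per service per day.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "service")
    .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate as a date-partitioned Parquet table.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "/data/metrics/daily_events/"
)
</code></pre>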
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL frameworks, metrics stores, infrastructure management, data security, large-scale messaging systems, workflow or orchestration frameworks, Airflow, DBT, Kafka, RabbitMQ</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks develops and operates a data and AI infrastructure platform for businesses.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7647369002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a3dc5a7-8d9</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>
<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>
<p>You will partner closely with Finance, People and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume; whether through dashboards, self-service exploration, or AI-powered workflows.</p>
<p>Responsibilities:</p>
<p>Data Modeling &amp; Semantics</p>
<ul>
<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>
<li>Define and standardize core Finance, HR, and Enterprise-level metrics (e.g., revenue, ARR, billing, attrition, executive insights, security) with clear, governed logic (a minimal sketch of one such governed metric follows this list)</li>
<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>
<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>
</ul>
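<p>To make the idea of governed metric logic concrete, here is a minimal, hypothetical sketch of a single shared metric definition executed against Snowflake from Python. The connection parameters, table, and ARR formula are illustrative assumptions, not actual Okta models.</p>
<pre><code># Hypothetical governed metric: every consumer runs the same vetted SQL
# instead of re-deriving ARR ad hoc. Names and formula are assumptions.
import snowflake.connector

ARR_SQL = """
    SELECT close_month,
           SUM(mrr_usd) * 12 AS arr_usd
    FROM analytics.fct_subscriptions
    WHERE is_active
    GROUP BY close_month
    ORDER BY close_month
"""

conn = snowflake.connector.connect(
    account="my_account",        # placeholders: real values come from config
    user="analytics_reader",
    password="...",
    warehouse="ANALYTICS_WH",
)
try:
    for close_month, arr_usd in conn.cursor().execute(ARR_SQL):
        print(close_month, arr_usd)
finally:
    conn.close()
</code></pre>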
<p>AI-Ready Data &amp; Snowflake Ecosystem</p>
<ul>
<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>
<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>
<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>
</ul>
<p>Data Quality, Governance &amp; Trust</p>
<ul>
<li>Implement robust testing, validation, and documentation practices in dbt</li>
<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>
<li>Apply data governance best practices, including access controls, lineage, and auditability</li>
<li>Partner across teams to establish clear ownership and accountability for data assets</li>
</ul>
<p>Collaboration &amp; Delivery</p>
<ul>
<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>
<li>Support self-service analytics by building intuitive, reusable datasets</li>
<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>
<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>
</ul>
<p>AI and Data Mindset</p>
<ul>
<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>
<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>
<li>Comfort leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>
<li>Curiosity for emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>
<li>Strong SQL skills and experience building analytics-ready data models</li>
<li>Mentorship &amp; engineering excellence: raising the technical bar and establishing organization-wide standards for dbt/SQL quality and CI/CD</li>
<li>Hands-on experience with dbt and Snowflake or other ETL, Modeling and database platforms</li>
<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>
<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>
<li>Experience translating business requirements into clear, maintainable data logic</li>
<li>Familiarity with SaaS metrics and Finance and People data (e.g., ARR, revenue recognition, billing, attrition etc.)</li>
<li>Experience with data quality, testing, and documentation best practices</li>
<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>
<li>Experience with BI tools such as Tableau or Looker</li>
<li>Strong communication skills and ability to work across technical and business teams</li>
</ul>
<p>What you can look forward to as an Okta employee!</p>
<ul>
<li>Amazing Benefits</li>
<li>Making Social Impact</li>
<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>
<li>Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in its degree of flexibility and mobility, so all employees are enabled to be their most creative and successful selves, regardless of where they live.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7818510?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25f010f0-7d1</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>Our Scientists and Engineers work together to make data, and the insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>
<p>What You’ll Do</p>
<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You&#39;ll also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>
<p>Where you’ll work</p>
<p>This role will be based in our San Francisco office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>
<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that make predictive analytics, data analysis, and metrics formulation straightforward.</li>
<li>Maintain data documentation and definitions, ensuring that source-of-truth tables remain high quality for data science and reporting applications (a minimal sketch of such quality checks follows this list).</li>
<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>
<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>
<li>Set and promote company-wide standards for data structure, quality, and expectations.</li>
<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>
</ul>
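<p>As an illustration of the quality guardrails mentioned above, here is a minimal, hypothetical Python sketch of checks one might run against a source-of-truth table before publishing it. The table and columns are assumptions, not Brex&#39;s actual schema.</p>
<pre><code># Hypothetical quality gate for a curated core table. Column names are
# illustrative; a real pipeline would read from the warehouse instead.
import pandas as pd

core_accounts = pd.DataFrame(
    {"account_id": [1, 2, 3],
     "opened_at": ["2026-01-02", "2026-01-05", "2026-02-01"]}
)

# Grain check: the table must be exactly one row per account.
assert core_accounts["account_id"].is_unique, "duplicate account_id values"

# Completeness check: key columns must not contain nulls.
null_counts = core_accounts[["account_id", "opened_at"]].isna().sum()
assert int(null_counts.sum()) == 0, f"null values found: {null_counts.to_dict()}"

print("core_accounts passed quality checks")
</code></pre>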
<p>Requirements:</p>
<ul>
<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>
<li>2+ years of experience working with modern data transformation tools like DBT.</li>
<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>
<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>
<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>
<li>Experience working with a data warehouse such as Snowflake.</li>
<li>Experience with a data workflow orchestrator tool such as Airflow.</li>
<li>Experience with a programming language such as Python.</li>
<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>
<li>Exceptional quantitative and analytical skills.</li>
<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,800 - $151,000</Salaryrange>
      <Skills>DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8366850002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c53ecdd3-dc7</externalid>
      <Title>Scale Solution Engineer</Title>
      <Description><![CDATA[<p>As a Scale Solution Engineer at Databricks, you will play a critical role in advising customers during their onboarding process. You will work directly with customers to help them onboard and deploy Databricks in their production environment.</p>
<p>Your impact will be significant, ensuring new customers have an excellent experience by providing technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions. You will also work directly with multiple customers concurrently to provide technical solutions.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>An undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>1+ years of experience in a technical role, preferably in the data or cloud field</li>
<li>Knowledge of at least one of the public cloud platforms AWS, Azure, or GCP</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Knowledge of end-to-end data analytics workflow</li>
<li>Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow), Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent time management and prioritization skills</li>
<li>Excellent written and verbal communication</li>
</ul>
<p>Bonus: Knowledge of Data Science and Machine Learning (e.g., building and deploying ML models)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public cloud platforms, AWS, Azure, GCP, Python, Scala, SQL, Data Engineering technologies, ETL, DBT, Spark, Airflow, Data Warehousing technologies, Stored Procedures, Redshift, Snowflake, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8408817002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d22e39a-bde</externalid>
      <Title>Data Analyst II</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>
<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>
<p>Our Data Scientists, Analysts, and Engineers work together to make data, and the insights derived from data, a core asset across the company.</p>
<p>What you’ll do</p>
<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>
<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>
<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>
<p>Where you’ll work</p>
<p>This role will be based in our San Francisco office.</p>
<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>
<p>We currently require a minimum of three coordinated days in the office per week, Monday, Wednesday and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<p>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</p>
<p>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</p>
<p>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</p>
<p>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</p>
<p>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</p>
<p>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</p>
<p>Contribute to the automation of recurring analyses and reporting workflows using Python.</p>
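<p>As a flavor of that kind of automation, here is a minimal, hypothetical Python sketch that turns a recurring query into a dated CSV report. The data source, query, and paths are illustrative assumptions rather than Brex tooling; a real version would query the company warehouse on a scheduler.</p>
<pre><code># Hypothetical recurring report: run a fixed query, write a dated CSV.
# Uses an in-memory SQLite table so the sketch is self-contained.
import csv
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (team TEXT, status TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?)",
                 [("ops", "open"), ("ops", "closed"), ("risk", "open")])

WEEKLY_SQL = """
    SELECT team, COUNT(*) AS open_tickets
    FROM tickets
    WHERE status = 'open'
    GROUP BY team
"""
rows = conn.execute(WEEKLY_SQL).fetchall()
conn.close()

out_path = f"open_tickets_{datetime.date.today():%Y%m%d}.csv"
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["team", "open_tickets"])
    writer.writerows(rows)

print(f"wrote {out_path}")
</code></pre>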
<p>Requirements</p>
<p>3+ years of experience in data analytics or a related role in a professional setting.</p>
<p>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</p>
<p>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</p>
<p>Experience with Python for data analysis, automation, or scripting.</p>
<p>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</p>
<p>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</p>
<p>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</p>
<p>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</p>
<p>Bonus points</p>
<p>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</p>
<p>Familiarity with dbt for data modeling and transformation.</p>
<p>Exposure to data pipeline orchestration tools (e.g., Airflow).</p>
<p>Experience in fintech, financial services, or payments.</p>
<p>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</p>
<p>Compensation</p>
<p>The expected salary range for this role is $93,600 - $117,000.</p>
<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$93,600 - $117,000</Salaryrange>
      <Skills>SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463696002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25a942db-90e</externalid>
      <Title>Senior Data Analyst (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>What is the Developer Led Growth (DLG) Team?</strong></p>
<p>At the heart of our mission is a simple goal: to make choosing and integrating authentication a delightful experience for every developer.</p>
<p>We’re the DLG team, essentially Product Led Growth (PLG) but for developers using Auth0. We are a high-impact, multidisciplinary engine that includes Developer Relations, Developer Marketing, our Startup Program, Product Activation, and Conversion. We don&#39;t just move metrics; we build the journey that turns curious developers into lifelong advocates.</p>
<p><strong>Why This Role?</strong></p>
<ul>
<li>Great Visibility: You will provide the analytical backbone for the entire DLG organisation, turning cross-functional data into a unified narrative.</li>
<li>Autonomy: We trust you to own your domain. This isn&#39;t a role for a cog in the machine; you’ll have the freedom to identify opportunities and drive the strategy.</li>
<li>True Variety: From analysing startup ecosystem trends to optimising conversion funnels, no two weeks will look the same.</li>
<li>Deep Collaboration: You’ll sit at the intersection of product, marketing, and community, working with stakeholders across the entire company to fuel our growth.</li>
</ul>
<p>If you’re a curious analyst who thrives in a fast-paced, developer-first environment and wants to see your work directly influence how the world’s engineers build, we’d love to meet you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and maintain Tableau dashboards and reports related to the DLG business</li>
<li>Build and maintain DBT models and custom SQL queries to analyse the DLG business</li>
<li>Investigate and understand discrepancies in the data</li>
<li>Help DLG automate and scale processes, and find areas of opportunity for process improvements.</li>
<li>Analyse A/B experiments and DLG initiatives (a minimal analysis sketch follows this list).</li>
<li>Collaborate with sales, ops, marketing, finance, and product to understand their challenges, dive into the data, and share insights that solve business problems.</li>
</ul>
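<p>To make the experimentation work concrete, here is a minimal, hypothetical sketch of reading out a two-variant A/B test with a two-proportion z-test in plain Python. The counts are invented for illustration.</p>
<pre><code># Hypothetical A/B readout: two-proportion z-test on signup conversion.
# The visitor and conversion counts below are invented.
import math

control_conv, control_n = 420, 10_000   # conversions, visitors in control
variant_conv, variant_n = 489, 10_000   # conversions, visitors in variant

p_control = control_conv / control_n
p_variant = variant_conv / variant_n

# Pooled standard error under the null hypothesis of equal rates.
p_pool = (control_conv + variant_conv) / (control_n + variant_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))

z = (p_variant - p_control) / se
# Two-sided p-value from the normal CDF, computed via the error function.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"lift: {p_variant - p_control:+.4f}, z = {z:.2f}, p = {p_value:.4f}")
</code></pre>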
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of experience working in data analytics-related fields.</li>
<li>Bachelor’s degree in math, statistics, computer science, economics, or a relevant field.</li>
<li>Strong proficiency in SQL and data modelling: cleaning, modelling, and transforming large and complex datasets into usable and trusted models.</li>
<li>Familiarity with DBT (data build tool), Git, and the Snowflake data warehouse is preferred.</li>
<li>Working knowledge of visualisation tools, ideally Tableau, and Google Sheets.</li>
<li>Ability to analyse data discrepancies and troubleshoot data issues.</li>
<li>Excellent problem-solving skills with the ability to think critically and translate complex data into actionable insights.</li>
<li>Familiarity with AI tooling, e.g. Copilot, Clay.</li>
<li>Excellent verbal and written communication skills, with the ability to collaborate with cross-functional teams including technical teams (data engineering) and business users (marketing, product, revenue, etc).</li>
<li>Stakeholder management and prioritisation skills</li>
</ul>
<p>Nice to have</p>
<ul>
<li>PLG experience</li>
<li>Experience building and working with AI Agents</li>
<li>Translating complex developer behaviour into product insights</li>
<li>A/B testing and experimentation experience</li>
<li>Revenue modelling and forecasting knowledge</li>
</ul>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $114,000-$156,200 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$114,000-$156,200 USD</Salaryrange>
      <Skills>SQL, DBT, Git, Snowflake data warehouse, Tableau, Google Sheets, AI tooling, Copilot, Clay, PLG experience, Experience building and working with AI Agents, Translating complex developer behaviour into product insights, AB testing experimentation experience, Revenue modelling and forecasting knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7683013?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ce3bb01-4a1</externalid>
      <Title>Scale Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we aim to empower our customers to solve the world&#39;s most challenging data problems using the Data Intelligence platform. As a Scale Solution Engineer, you will be critical in advising customers during their onboarding. You will work directly with customers to help them onboard and deploy Databricks in their production environment and accelerate Databricks features adoption.</p>
<p>The impact you will have:</p>
<ul>
<li>Ensure new customers have an excellent experience by providing technical assistance early in their journey</li>
<li>Become an expert on the Databricks Platform and guide customers in making the best technical decisions</li>
<li>Work directly with multiple customers concurrently to provide technical solutions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>3+ years of experience in a customer-facing technical role in pre-sales, professional services, consulting, or customer success</li>
<li>Solid understanding of the end-to-end data analytics workflow</li>
<li>Excellent time management and prioritization skills</li>
<li>Knowledge of the public cloud platforms AWS, Azure, or GCP would be a plus</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Hands-on professional or academic experience in one or more of the following:
<ul>
<li>Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow)</li>
<li>Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
</ul>
</li>
<li>Excellent written and verbal communication, in English and Portuguese</li>
<li>Bonus: Knowledge of Data Science and Machine Learning (e.g., building and deploying ML models)</li>
<li>Databricks certification(s)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Data Engineering, Data Warehousing, Python, Scala, SQL, AWS, Azure, GCP, ETL, DBT, Spark, Airflow, Redshift, Snowflake, English, Portuguese, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8391865002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a372c4e5-b8f</externalid>
      <Title>Data Engineer II - Platform Analytics - Kibana Platform - AppEx</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Data Engineer to join our Platform Analytics team. In this role, you&#39;ll help build and maintain scalable data pipelines and analytics solutions that support business, product, and technical use cases across Elastic. You&#39;ll work closely with cross-functional partners to deliver reliable, high-quality data in a fast-moving, distributed environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, enhance, and maintain data ingestion and transformation pipelines</li>
<li>Develop and optimize analytics datasets using BigQuery and dbt (a minimal rollup sketch follows this list)</li>
<li>Support and maintain existing data systems as needed to ensure continuity and data reliability</li>
<li>Design scalable data models that enable trusted analytics and reporting</li>
<li>Partner with product managers, analysts, and solution teams to translate ambiguous requirements into effective data solutions</li>
<li>Monitor data quality and system health to ensure accurate, timely insights</li>
</ul>
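<p>As one concrete flavor of the BigQuery work above, here is a minimal, hypothetical sketch that uses the google-cloud-bigquery Python client to materialize a small analytics rollup. The project, dataset, table, and query are illustrative assumptions, and running it requires Google Cloud credentials.</p>
<pre><code># Hypothetical rollup job: aggregate raw telemetry into a daily dataset.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

ROLLUP_SQL = """
    SELECT DATE(event_ts) AS event_date,
           feature,
           COUNT(*) AS events
    FROM `my-project.telemetry.raw_events`
    GROUP BY event_date, feature
"""

dest = bigquery.TableReference.from_string(
    "my-project.analytics.daily_feature_usage"
)
job_config = bigquery.QueryJobConfig(
    destination=dest,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.query(ROLLUP_SQL, job_config=job_config).result()
print("daily_feature_usage refreshed")
</code></pre>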
<p><strong>Requirements</strong></p>
<ul>
<li>Strong experience with SQL and Python</li>
<li>3+ years of experience in Data Engineering, preferably on Google Cloud Platform (GCP)</li>
<li>Experience designing and operating production data pipelines at scale</li>
<li>Good knowledge of architecture and design (patterns, reliability, scalability, quality) of complex systems</li>
<li>Familiarity with BigQuery and modern ELT tools (e.g., dbt)</li>
<li>Experience with AI tools and workflows</li>
<li>Strong analytical and problem-solving skills</li>
<li>Clear written and verbal communication skills</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with Buildkite and Terraform</li>
<li>Experience with Dataflow on GCP</li>
<li>Experience with Elasticsearch</li>
<li>Experience with Kubernetes</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with minimum of 16 weeks of parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, BigQuery, dbt, Google Cloud Platform (GCP), AI tools and workflows, Buildkite, Terraform, Dataflow on GCP, Elasticsearch, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic&apos;s Search AI Platform brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter, used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7614519?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Greece</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cece3778-5b8</externalid>
      <Title>Finance Systems Integration Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>
<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>
<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Core Focus: Integration Development &amp; ERP Support</strong></p>
<ul>
<li>Design, build, and maintain integrations connecting ERP systems with downstream applications including ZipHQ, Brex, Navan, Clearwater, payroll systems, Salesforce, and other critical financial platforms, using Workato, MuleSoft, or similar iPaaS solutions</li>
<li>Support integration development and testing during ERP implementation projects</li>
<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication (a minimal token-flow sketch follows this list)</li>
<li>Implement real-time and batch integration patterns supporting high-volume financial transactions</li>
<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity</li>
<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures</li>
<li>Collaborate with implementation consulting partners and vendors on technical integration requirements</li>
</ul>
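<p>Since the role leans on OAuth 2.0 for service-to-service calls, here is a minimal, hypothetical sketch of a client-credentials token exchange in Python. The endpoints, scope, and credentials are placeholders, not any vendor&#39;s actual API.</p>
<pre><code># Hypothetical OAuth 2.0 client-credentials flow: exchange a client ID and
# secret for a bearer token, then call a downstream API with it.
import requests

token_resp = requests.post(
    "https://auth.example.com/oauth2/token",          # placeholder endpoint
    data={"grant_type": "client_credentials", "scope": "invoices.read"},
    auth=("my-client-id", "my-client-secret"),        # placeholder credentials
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Use the short-lived token on a downstream financial API call.
api_resp = requests.get(
    "https://erp.example.com/api/v1/invoices",        # placeholder endpoint
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
api_resp.raise_for_status()
print(len(api_resp.json()), "invoices fetched")
</code></pre>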
<p><strong>Additional Scope: AI Automation &amp; Data Infrastructure</strong></p>
<ul>
<li>Build and deploy Claude-powered AI agents that automate financial operations including intelligent document processing, workflow automation, financial audit and reconciliations, and self-service reporting</li>
<li>Design agentic workflows that leverage Claude API capabilities integrated with ERP platform data and processes</li>
<li>Create automated validation and quality assurance processes for AI-generated outputs</li>
<li>Partner with Finance teams to identify automation opportunities and translate requirements into AI agent solutions</li>
<li>Support data pipeline development using Airflow for workflow orchestration and dbt for data transformation (a minimal DAG sketch follows this list)</li>
<li>Build and maintain data flows from ERP and other financial systems into BigQuery for analytics and reporting</li>
<li>Implement data quality checks and testing frameworks for financial data pipelines</li>
<li>Collaborate with the Data Infrastructure team on pipeline architecture, performance optimization, and security monitoring</li>
<li>Support executive dashboards and financial analytics by ensuring timely, accurate data delivery</li>
</ul>
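<p>To ground the orchestration piece, here is a minimal, hypothetical Airflow DAG sketch (assuming a recent Airflow 2.x) that runs dbt models for finance data and then the corresponding dbt tests. The dag_id, schedule, and project path are illustrative assumptions.</p>
<pre><code># Hypothetical Airflow DAG: build finance dbt models, then run dbt tests.
# dag_id, schedule, and the dbt project path are placeholders.
import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="finance_daily_transform",
    start_date=datetime.datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/finance",
    )
    run_tests = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/finance",
    )

    # Quality checks only run after the models build successfully.
    run_models >> run_tests
</code></pre>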
<p><strong>Governance &amp; Collaboration</strong></p>
<ul>
<li>Maintain comprehensive documentation for integrations, AI agents, and data pipelines</li>
<li>Support internal and external audits with technical evidence and system access reviews</li>
<li>Collaborate with Finance Systems Engineers on operational support, troubleshooting, and enhancement requests</li>
<li>Partner with Finance Operations, Accounting, FP&amp;A, Engineering, and Data Infrastructure teams to deliver holistic solutions</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience in integration development, data engineering, or systems engineering roles</li>
<li>Hands-on experience with iPaaS platforms such as Workato, MuleSoft, Dell Boomi, or similar integration tools</li>
<li>Strong programming skills in Python and/or JavaScript/TypeScript for building custom integrations, APIs, and automation scripts</li>
<li>Experience with data pipeline tools including Airflow for orchestration and dbt for transformation</li>
<li>Working knowledge of cloud data platforms such as BigQuery, Snowflake, or Databricks</li>
<li>Understanding of REST API design patterns, webhooks, OAuth 2.0, and modern integration architectures</li>
<li>Familiarity with ERP systems (Oracle Fusion, Workday Financials, or similar) and financial business processes</li>
<li>Strong problem-solving skills with ability to debug complex integration issues across multiple systems</li>
<li>Excellent communication skills to collaborate with technical and business stakeholders</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with high-growth technology companies scaling through rapid revenue expansion (5x-10x growth)</li>
<li>Background in AI/ML companies with familiarity in modern SaaS business models including consumption-based pricing, usage metering platforms, and marketplace billing</li>
<li>Hands-on experience with specific platforms: Workday Financials (Workday Studio, EIB, custom reports, Prism Analytics)</li>
<li>Technical expertise with the modern finance tech stack including Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management</li>
<li>Programming skills in Python / JavaScript, or similar languages for building custom integrations, APIs, and automation scripts</li>
<li>Experience with AI/LLM integration for financial operations, including document processing, data extraction, intelligent automation, and agentic workflows (familiarity with Claude models and API is a plus)</li>
<li>Hands-on experience with modern data stack tools: BigQuery/Snowflake/Databricks, dbt for data transformation, Airflow for workflow orchestration</li>
<li>Professional certifications such as Workato, Workday integrations, or relevant technical credentials</li>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Information Systems, Accounting, Finance, Engineering, or related technical/business field</li>
<li>Experience with business intelligence and financial reporting tools (Hex, Looker, Tableau, Power BI) for executive dashboards and financial analytics</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>integration development, data engineering, systems engineering, iPaaS platforms, Python, JavaScript/TypeScript, Airflow, dbt, BigQuery, Snowflake, Databricks, REST API design patterns, webhooks, OAuth 2.0, modern integration architectures, ERP systems, financial business processes, high-growth technology companies, AI/ML companies, SaaS business models, consumption-based pricing, usage metering platforms, marketplace billing, Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management, Python/JavaScript, AI/LLM integration, document processing, data extraction, intelligent automation, agentic workflows, Claude models, API, BigQuery/Snowflake/Databricks, professional certifications, Workato, Workday integrations, technical credentials, Computer Science, Information Systems, Accounting, Finance, Engineering, business intelligence, financial reporting tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a leading AI company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155195008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f904cf7-7bd</externalid>
      <Title>Data Analyst II</Title>
      <Description><![CDATA[<p>Join us at Brex, the intelligent finance platform that empowers companies to spend smarter and move faster in over 200 markets. As a Data Analyst II, you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>
<p>As a member of our Data organization, you will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses. This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</li>
<li>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</li>
<li>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</li>
<li>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</li>
<li>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</li>
<li>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</li>
<li>Contribute to the automation of recurring analyses and reporting workflows using Python.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience in data analytics or a related role in a professional setting.</li>
<li>3+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</li>
<li>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</li>
<li>Proficiency in Python for data analysis, automation, and scripting (Pandas, NumPy, and similar libraries).</li>
<li>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</li>
<li>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</li>
<li>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</li>
<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</li>
<li>Familiarity with dbt for data modeling and transformation.</li>
<li>Exposure to data pipeline orchestration tools (e.g., Airflow).</li>
<li>Experience in fintech, financial services, or payments.</li>
<li>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, financial services, or payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex LLC</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463703002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>São Paulo, São Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8ea1c59a-b10</externalid>
      <Title>Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Behind many of life&#39;s most important transactions (buying a house, applying for a mortgage, getting a small business loan, or refinancing a credit card) is a network of credit relationships. Setpoint provides critical infrastructure for relationships between the world&#39;s largest banks, credit funds, and capital markets counterparties.</p>
<p>We are looking for an Analytics Engineer to join our team supporting our external data products. In this position, you will report to our Head of Analytics and partner closely with engineering, product, and implementation teams to build and maintain the data infrastructure that powers our products. This is a hands-on role where you&#39;ll design and maintain data pipelines, create dashboards, and work directly with stakeholders to deliver insights that drive business outcomes.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own end-to-end implementations of data products for our external customers.</li>
<li>Build and maintain scalable data pipelines and analytics models using Python, DBT, and SQL.</li>
<li>Create and manage dashboards in our external and internal analytics products so stakeholders have actionable insights.</li>
<li>Prototype new data products to add to our product suite.</li>
<li>Collaborate with engineering to drive the roadmap for the data products.</li>
<li>Use GitHub-based workflows to maintain clean, version-controlled analytics code.</li>
<li>Act as a subject matter expert for analytics best practices.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience in analytics engineering, data engineering, or a similar technical role.</li>
<li>Proven expertise in DBT, SQL, and GitHub-based development. Python is a plus.</li>
<li>Strong experience designing, implementing, and maintaining data pipelines and models.</li>
<li>Experience in a customer-facing role, working with technical and business stakeholders.</li>
<li>Experience working with asset managers, business owners, or financial services data sets is a plus.</li>
<li>Excellent problem-solving, communication, and collaboration skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salaries</li>
<li>Stock options</li>
<li>Medical, dental, and vision coverage</li>
<li>401(k)</li>
<li>Short term and long term disability coverage</li>
<li>Flexible vacation</li>
</ul>
<p><strong>Compensation</strong></p>
<p>$150,000 - $170,000 OTE dependent on multiple factors, which may include the candidate&#39;s skills, experience, location, and other qualifications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$150,000 - $170,000</Salaryrange>
      <Skills>Python, SQL, DBT, GitHub, Data engineering, Analytics engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Setpoint</Employername>
      <Employerlogo>https://logos.yubhub.co/setpoint.com.png</Employerlogo>
      <Employerdescription>Setpoint provides critical infrastructure for relationships between the world&apos;s largest banks, credit funds and capital markets counterparties.</Employerdescription>
      <Employerwebsite>https://setpoint.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/setpoint/jobs/4801438007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Austin or New York (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>886118d3-6a1</externalid>
      <Title>Senior Data Engineer - Data Engineering</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The main goal of the DE team in 2024-25 is to build robust golden data sets to power our business goals of creating more insights-based products. Making data-driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data.</p>
<p>Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like DBT, Airflow, Redshift, ElasticSearch, Atlanta, and Retool to orchestrate data pipelines and define workflows.</p>
<p>We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid&#39;s data strategy and a data-first mindset.</p>
<p>Our engineering culture is IC-driven: we favor bottom-up ideation and empowerment of our incredibly talented team.</p>
<p>We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.</p>
<p>You will be in a high-impact role that will directly enable business leaders to make faster and more informed business judgments based on the datasets you build.</p>
<p>You will have the opportunity to carve out the ownership and scope of internal datasets and visualizations across Plaid, a currently unowned area that we intend to take over and build SLAs for.</p>
<p>You will have the opportunity to learn best practices and up-level your technical skills from our strong DE team and from the broader Data Platform team.</p>
<p>You will collaborate closely and build strong cross-functional partnerships with teams across Plaid, from Engineering and Product to Marketing and Finance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design, and data usage principles.</li>
<li>Keeping data quality and performance top of mind while designing datasets.</li>
<li>Leading key data engineering projects that drive collaboration across the company.</li>
<li>Advocating for adopting industry tools and practices at the right time.</li>
<li>Owning core SQL and Python data pipelines that power our data lake and data warehouse.</li>
<li>Delivering well-documented data with defined dataset quality, uptime, and usefulness.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale.</li>
<li>You have experience building data models and data pipelines on top of large datasets (on the order of 500 TB to petabytes).</li>
<li>You value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow.</li>
<li>You have experience working with different performant warehouses and data lakes, such as Redshift, Snowflake, and Databricks.</li>
<li>You have experience building and maintaining batch and real-time pipelines using technologies like Spark and Kafka.</li>
<li>You appreciate the importance of schema design and can evolve an analytics schema on top of unstructured data.</li>
<li>You are excited to try out new technologies and like to produce proofs of concept that balance technical advancement with user experience and adoption.</li>
<li>You like to get deep in the weeds to manage, deploy, and improve low-level data infrastructure.</li>
<li>You are empathetic when working with stakeholders: you listen to them, ask the right questions, and collaboratively arrive at the best solutions for their needs while balancing infrastructure and business needs.</li>
<li>You are a champion for data privacy and integrity, and always act in the best interest of consumers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p>We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p>We are always looking for team members that will bring something unique to Plaid!</p>
<p>Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.</p>
<p>Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>SQL, Python, DBT, Airflow, Redshift, ElasticSearch, Atlanta, Retool, Spark, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and services for developers to connect financial accounts to applications and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/022278b3-0943-44b3-a54b-1de421017589?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2324ce80-532</externalid>
      <Title>Data Scientist - Network Value</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The Network Value Data Science team is helping Plaid build an industry-leading fintech consumer network by increasing access to, authorization for, and usability of Plaid users&#39; financial footprints. We embed within product teams to support OKRs and help execute on product roadmaps. We translate ambiguous product questions into tractable analyses, serve as analytical thought partners throughout the org, identify opportunities to build better products, and champion a data-first decision-making approach everywhere we go.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Perform ad-hoc and strategic analyses to uncover opportunities for improved business outcomes and translate complex questions into actionable analytics projects.</li>
<li>Design and maintain scalable data models and dashboards that increase visibility into core systems and drive operational excellence.</li>
<li>Build and iterate on machine learning prototypes to power insight-driven products and unlock new sources of customer and business value.</li>
<li>Define and track OKRs that quantify progress toward key business goals, ensuring alignment and accountability across teams.</li>
<li>Design and analyze experiments to guide product decisions and optimize feature launches.</li>
<li>Champion a data-first culture by promoting analytical rigor and evidence-based decision-making across the organization.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>2+ years of experience as a Data Scientist or in a related analytics or data-focused role</li>
<li>Strong track record of turning complex data into strategic insights and measurable business impact</li>
<li>Proven ability to use experimentation, advanced analytics, and data storytelling to uncover opportunities that drive key product and business outcomes</li>
<li>Strong technical foundation in SQL and Python for large-scale analysis, data modeling, and ML prototyping</li>
<li>Experience developing and maintaining data pipelines and metrics frameworks using tools such as Airflow and dbt</li>
<li>Background working with complex backend systems, ensuring data integrity, scalability, and operational reliability across platforms</li>
<li>Skilled at partnering cross-functionally with product, engineering, and business teams to influence prioritization and strategy through clear, data-driven communication</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$176,400-$243,600 per year</Salaryrange>
      <Skills>SQL, Python, Machine Learning, Data Modeling, Data Pipelines, Metrics Frameworks, Airflow, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a fintech company that builds tools and experiences for developers to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/18503c02-17a0-4c47-98c8-155b0b6ccc2a?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd05f3e3-531</externalid>
      <Title>Data/Analytics Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI TAGline Removed
We are seeking passionate and talented Data/Analytics Engineers to join our team.</p>
<p>In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure. You will work with large volumes of data, enabling product teams to access secure and reliable data quickly. Your contributions will support our science team in enhancing the quality of our state-of-the-art AI models and help business users make informed decisions.</p>
<p>Responsibilities</p>
<p>• Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.
• Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions, e.g., work closely with machine learning teams to support model training, deployment pipelines, and feature stores.
• Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.
• Define and enforce data governance, metadata management, and data lineage standards.
• Ensure data integrity, security, and compliance with industry standards.</p>
<p>About You</p>
<p>• Master’s degree in Computer Science, Engineering, Statistics, or a related field.
• 3+ years of experience in data engineering, analytics engineering, or a related role.
• Proficiency in Python and SQL.
• Experience with dbt.
• Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).
• Strong analytical and problem-solving skills, with attention to detail.
• Ability to communicate complex data concepts to both technical and non-technical stakeholders.</p>
<p>Nice to Have</p>
<p>• Experience with machine learning pipelines, MLOps, and feature engineering.
• Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
• Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).
• Background in building self-service data platforms for analytics and AI use cases.</p>
<p>Hiring Process</p>
<p>• Intro call with Recruiter - 30 min
• Hiring Manager Interview - 30 min
• Technical interview - Live Coding (Python/SQL) - 45 min
• Technical interview - System Design - 45 min
• Value talk interview - 30 mins
• References</p>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>The position is based in our Paris HQ offices, and we encourage coming into the office as much as possible (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance, and increase productivity. Each manager can decide the number of days worked remotely based on autonomy and specific context (e.g., more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p>What We Offer</p>
<p>💰 Competitive salary and equity package
🧑‍⚕️ Health insurance
🚴 Transportation allowance
🥎 Sport allowance
🥕 Meal vouchers
💰 Private pension plan
🍼 Generous parental leave policy</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. Its comprehensive AI platform meets on-premises and cloud-based needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ed94ae42-58d</externalid>
      <Title>Data Scientist</Title>
      <Description><![CDATA[<p>We are seeking skilled and motivated Data Scientists to join our team in Paris. You will work at the intersection of product, research, and engineering. You will analyze user behavior, optimize product performance, and design data-driven features that enhance our AI product suite.</p>
<p>This role is ideal for someone who thrives in a dynamic environment, enjoys tackling ambiguous problems, and wants to influence our product roadmap directly. The ideal candidate will have a strong background in data analysis, statistical modeling, and machine learning, with a proven track record of delivering high-quality results in a fast-paced environment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining key metrics, understanding in-depth the performance of our AI products, and identifying opportunities for improvement.</li>
<li>Designing and analyzing experiments (A/B tests, causal inference) to validate hypotheses and guide product decisions.</li>
<li>Designing and implementing end-to-end data science projects, from data collection and preprocessing to model building, evaluation, and deployment.</li>
<li>Evaluating model performance in training and production, identifying edge cases, and developing frameworks to measure user satisfaction.</li>
<li>Working with engineering to ensure data quality and accessibility for analytics and ML.</li>
</ul>
<p>Nice-to-have skills include familiarity with DBT or similar tools, and experience working on customer-facing products, enterprise products, and with the science team.</p>
<p>By joining our team, you will have the opportunity to work on exciting projects, collaborate with talented individuals, and contribute to the development of cutting-edge AI solutions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Statistical modeling, Machine learning, Data analysis, DBT, Customer-facing products, Enterprise products, Science team</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a swiftly growing company that specializes in developing high-performance, open-source, and cutting-edge AI models, products, and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/bf5bcae2-839b-492e-a5bc-11d4427ee843?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e503559e-cf7</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>
<p><strong>Job Description:</strong></p>
<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>
<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>
<p><strong>About Us:</strong></p>
<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>
<p><strong>Job Responsibilities:</strong></p>
<p>As part of this role, you will:</p>
<ul>
<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>
<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human in the loop, escalations, and feeding human decisions back into the service</li>
<li>Implement testing, observability, alerting, and disaster recovery for all services</li>
<li>Implement tracing, performance, and regression testing</li>
<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<p>The ideal candidate for the role has:</p>
<ul>
<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>
<li>Expertise with:</li>
</ul>
<ul>
<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>
<li>SQL, dbt, Python</li>
<li>OLAP / OLTP data modelling and architecture</li>
<li>Key-value stores: Redis, dynamoDB, or equivalent</li>
<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>
<li>API frameworks: FastAPI, Flask, etc.</li>
<li>Production ML Service experience</li>
<li>Working across a full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>
</ul>
<p><strong>Total Rewards Package:</strong></p>
<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p><strong>Salary Range:</strong></p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $200,700 - $250,900</li>
<li>Canadian employees (any location): CAD 189,700 - 237,100</li>
</ul>
<p><strong>Diversity &amp; Belonging:</strong></p>
<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)</Salaryrange>
      <Skills>Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, dynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5639559004?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8a8c0eb9-6e6</externalid>
      <Title>Data Scientist, Product</Title>
      <Description><![CDATA[<p><strong>Job Title: Data Scientist, Product</strong></p>
<p>This is the founding hire for product analytics at Hebbia. As a data scientist, you will define what our core product metrics are: what counts as an active user, what engagement actually means, what signals correlate with retention.</p>
<p>This is not a dashboarding role. The goal is to shape product decisions with data, not just report on them. You will identify which workflows drive repeat usage, where users drop off, what features move engagement, and what differentiates power users from casual users across our enterprise customer base.</p>
<p>The role sits at the intersection of analytics engineering, product analytics, and data science. You will build the infrastructure and do the analysis. Define the metrics, build the pipelines, create the dashboards, and use what you built to inform the roadmap.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement Hebbia&#39;s core product metrics from scratch: active users, engagement, retention, feature adoption, account health. Build the canonical definitions the entire company uses.</li>
<li>Design and build the product analytics infrastructure: fact tables, clean data models, and the analytics layer that sits on top of our product data.</li>
<li>Build and maintain executive and product dashboards that leadership and product teams use to make decisions.</li>
<li>Write DAGs, transforms, and data pipelines that support analytics. Work with engineering to instrument the product so usage data is captured correctly.</li>
<li>Analyze customer behavior across our B2B customer base: account-level usage patterns, workflow adoption, expansion signals, and churn risk indicators.</li>
<li>Inform the product roadmap using data. Identify friction in user flows, surface feature adoption patterns, and highlight opportunities for product improvement.</li>
<li>Partner with product managers and engineers to translate product questions into measurable data and structured experiments.</li>
<li>Establish data quality standards and documentation so the metrics layer you build is trusted and maintained.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>3+ years of experience in product analytics, analytics engineering, or data science at a B2B SaaS company or high-growth startup</li>
<li>Strong in SQL and Python. You can write production-quality transforms, not just ad hoc queries.</li>
<li>Experience with modern data stack tools: dbt, Airflow, Snowflake, BigQuery, or similar. You understand data modeling and warehouse architecture.</li>
<li>You have built dashboards and reporting that product teams and leadership actually use to make decisions</li>
<li>You understand B2B product analytics: account-level metrics, multi-user workflows, enterprise engagement patterns, and why B2B retention analysis is different from consumer</li>
<li>You translate ambiguous product questions into structured analyses. You do not wait for someone to hand you a spec.</li>
<li>Strong product intuition. You care about why users behave the way they do, not just what the numbers say.</li>
<li>Clear communicator. You can present findings to engineers, product managers, and executives with equal effectiveness.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The salary range for this position is $180,000 to $260,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $260,000</Salaryrange>
      <Skills>SQL, Python, dbt, Airflow, Snowflake, BigQuery, data modeling, warehouse architecture, product analytics, analytics engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside. Founded in 2020, Hebbia powers investment decisions for major asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4670090005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>eb26af8f-c1a</externalid>
      <Title>Data Scientist</Title>
      <Description><![CDATA[<p>We are seeking a pragmatic, end-to-end Data Scientist who can operate across the full data lifecycle, from ingestion and modeling to productionizing key data systems. This is a high-impact, high-agency role which reports directly to the CTO. Modern AI-assisted development tools make this role possible, where the data scientist can now do real engineering, too.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate closely with other teams (Sales, Finance, Product, Marketing, and more) to translate problems and needs into action-oriented data solutions</li>
<li>Design, build, and maintain data pipelines for reliable ingestion and transformation</li>
<li>Rapidly prototype and iterate using AI coding tools to accelerate development and reduce toil</li>
<li>Drive rigor and best practices, with a focus on data quality, consistency, and transparency</li>
<li>Develop and deploy statistical models and machine learning, where appropriate</li>
<li>Build clear, decision-oriented visualizations and dashboards for stakeholders across multiple departments</li>
<li>Own selected production data systems: selection, orchestration, monitoring, and tuning</li>
<li>Communicate and shepherd key data results and analysis to executives</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Experience with B2B SaaS-relevant data, including Salesforce and financial metrics</li>
<li>Strong communication skills and ability to work effectively across multiple departments and stakeholder groups</li>
<li>Ownership mindset and ability to deliver end-to-end outcomes independently; must be a &quot;startup type&quot;</li>
<li>Demonstrated ability to design data pipelines and work with imperfect, evolving data sources</li>
<li>Sharp attention to data quality, including validation, anomaly detection, and root-cause analysis of inconsistencies</li>
<li>Strong proficiency in Python and SQL; experience with modern data stack tools (e.g., dbt, Airflow, Spark, or equivalents) is a plus</li>
<li>Experience with data visualization tools (e.g., Tableau, Looker, or similar)</li>
<li>Some familiarity with infrastructure and related setup (databases, data warehouses, VMs)</li>
<li>Knowledge of core machine learning concepts and when to apply them pragmatically</li>
</ul>
<p><strong>Initial Projects:</strong></p>
<ul>
<li>Build a likelihood-of-close model for Salesforce opportunities, which factors in relevant metadata and history</li>
<li>Create a framework and initial implementation for an executive operational dashboard, working with a broad set of teams</li>
<li>Define, validate, and implement key SaaS product-usage metrics</li>
</ul>
<p>As we grow, you will, too, with the broad scope of a software startup.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$170,000 - $190,000</Salaryrange>
      <Skills>Python, SQL, data visualization, machine learning, data pipelines, data quality, dbt, Airflow, Spark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Forward Networks</Employername>
      <Employerlogo>https://logos.yubhub.co/forward.net.png</Employerlogo>
      <Employerdescription>Forward Networks is a software company founded in 2013 by four Stanford Ph.D.s, providing network digital twins for IT teams.</Employerdescription>
      <Employerwebsite>https://www.forward.net/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/forwardnetworks/jobs/7695301003?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Santa Clara, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Metadata and catalog experience, Data observability tools, Infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3d849fbc-058</externalid>
      <Title>Member of Product, Data Platform</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>
<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>
<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>
<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>
<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>
<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>
<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>
<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>
<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>
<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>
<li>Ability to understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components, and to communicate clear requirements to engineering.</li>
<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>
<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>
<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>
<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>
<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>
<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>
<li>Drive company objectives through strategic data platform decisions and initiatives.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>
<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>
<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>
<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>Have 5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>
<li>Have proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>
<li>Have a strong understanding of data modeling, data governance, and data quality frameworks.</li>
<li>Have experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>
<li>Have a track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>
<li>Have exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>
<li>Bring empathy and adaptability that not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>
<li>Describe yourself as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>
</ul>
<p><strong>Although not a requirement, bonus points if you have:</strong></p>
<ul>
<li>Hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>
<li>Experience building data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>
<li>Experience supporting clients with data-related issues or concerns.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0b1fb5b7-d63</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Data Platform Engineer to join our team. As a Data Platform Engineer, you will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</p>
<p>Key responsibilities include:</p>
<ul>
<li><strong>Building for Scale:</strong> You will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</li>
<li><strong>Mastering the Orchestration:</strong> You’ll dive deep into Kubernetes, leveraging Operators and Helm to automate complex data workflows and platform management, building out Kubernetes-native data and AI architecture.</li>
<li><strong>Bridging the Clouds:</strong> You will improve our existing tooling and implement new, seamless integrations between our AWS and GCP environments.</li>
<li><strong>Defining our State:</strong> You’ll use Terraform to manage and define our entire data infrastructure through code, ensuring reproducibility and transparency across the stack.</li>
</ul>
<p>Requirements:</p>
<ul>
<li><strong>K8s Expertise:</strong> You have a solid understanding and practical experience with Kubernetes, specifically working with Operators and Helm to manage complex application lifecycles.</li>
<li><strong>The Engineer&#39;s Mindset:</strong> You are proficient in Python or Java and enjoy writing clean, efficient code to solve infrastructure challenges.</li>
<li><strong>Cloud Native:</strong> You are comfortable working in at least one of the major cloud providers (AWS or GCP) and understand how to get the best out of their managed services.</li>
<li><strong>Optimising and Refining:</strong> You will optimise and refine our current data infrastructure and deploy greenfield Kubernetes-native OSS projects.</li>
</ul>
<p>Bonus points if you have:</p>
<ul>
<li>Experience with SQL-based transformation workflows, specifically using dbt within BigQuery.</li>
<li>Familiarity with streaming and ingestion tech like Kafka or Debezium.</li>
<li>A background in Linux administration or data management best practices.</li>
</ul>
<p>Interview process:</p>
<p>Interviewing is a two-way process, and we want you to have the time and opportunity to get to know us as much as we are getting to know you! Our interviews are conversational, and we want to get the best from you, so come with questions and be curious. In general, following a chat with one of our Talent Team, you can expect:</p>
<ul>
<li>Stage 1 - 30 minutes with one of the team</li>
<li>Stage 2 - Take-home challenge</li>
<li>Stage 3 - 60-minute technical interview with two team members</li>
<li>Stage 4 - 45-minute final interview with two data executives</li>
</ul>
<p>Benefits:</p>
<ul>
<li>25 days holiday (plus take your public holiday allowance whenever works best for you)</li>
<li>An extra day’s holiday for your birthday</li>
<li>Annual leave increases with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance at 4x your salary &amp; group income protection</li>
<li>Private Medical Insurance with VitalityHealth, including mental health support and cancer care</li>
<li>Partner benefits include discounts with Waitrose, Mr&amp;Mrs Smith and Peloton</li>
<li>Generous family-friendly policies</li>
<li>Perkbox membership giving access to retail discounts, a wellness platform for physical and mental health, and weekly free and boosted perks</li>
<li>Access to initiatives like Cycle to Work, Salary Sacrificed Gym partnerships and Electric Vehicle (EV) leasing</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, Java, Terraform, AWS, GCP, SQL, dbt, BigQuery, Kafka, Debezium, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank operating in the UK, employing over 3,000 people across multiple locations.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/1EA5EDDAD9?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>4b700ee3-482</externalid>
      <Title>Analytics Engineer (Finance)</Title>
      <Description><![CDATA[<p>We are looking for an Analytics Engineer to join our team. As an Analytics Engineer, you will be responsible for translating data requirements from across the organisation into robust and reusable data models, with a particular focus on financial regulatory submissions or financial analytics.</p>
<ul>
<li>Maintain consistent and clear documentation and communicate with business stakeholders (both technical and non-technical).</li>
<li>Collaborate with the wider data team to help meet the business goals, including peer reviews.</li>
<li>Take ownership of a project end-to-end and manage priorities accordingly.</li>
</ul>
<p>Our ideal candidate will have strong experience with SQL, experience working within the credit domain, and be a self-starter with the ability to think outside the box.</p>
<p>They will also have good attention to detail, strong experience with Looker or a similar visualisation tool, and strong communication and documentation skills for both technical and non-technical audiences.</p>
<p>As a member of our team, you will have the opportunity to work on a wide range of projects and contribute to the development of our data capabilities.</p>
<p>We offer a competitive salary and benefits package, including 25 days holiday, an extra day&#39;s holiday for your birthday, and annual leave increased with length of service.</p>
<p>We are an equal opportunities employer and welcome applications from all qualified candidates.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Looker, credit domain, data modelling, financial analytics, dbt, data visualisation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that provides financial services. It has over 3.5 million accounts and employs over 2,800 people across five offices.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D74D88F51C?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Southampton</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>65e7bd92-c31</externalid>
      <Title>FBS Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Team Function</strong></p>
<p>The Direct modeling team is focused on creating models to guide enterprise marketing decisions that help promote brand awareness and boost sales through the direct channel.</p>
<p><strong>Role Description:</strong></p>
<p>This position plays a crucial role in the data ecosystem by iteratively transforming raw data into structured, high-quality datasets that are ready for analysis in partnership with data/decision scientists. The role primarily focuses on moderately complex business problems while receiving limited coaching and guidance from data leadership. The role combines the technical skills of a data engineer, the analytical mindset of a data analyst, and strong business acumen to ensure data is not only collected and stored efficiently but also made accessible and insightful for end users. In partnership with data/decision scientists, the position is responsible for end-to-end data workflow including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization. This position requires deep understanding of data engineering, business processes, and analytics principles as well as a proactive approach to solving complex data challenges.</p>
<p><strong>Essential Job Functions:</strong></p>
<p><strong>1) Data infrastructure development</strong>: Pipeline Design and Development - Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar. Automates data ingestion processes from various sources including databases, APIs, and third-party services. Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery. Optimizes storage solutions for performance, cost efficiency, and scalability.</p>
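<p>For illustration only: a minimal Airflow DAG sketch of the ingest-then-transform pattern described above. The DAG id, task bodies, and dbt selector are hypothetical placeholders, not details from this role.</p>
<pre><code># Hypothetical daily ingest-then-transform pipeline (all names are illustrative).
from datetime import datetime
import subprocess

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_source():
    # Placeholder: pull rows from an API or database into raw storage.
    print("ingesting raw data...")

def run_dbt_models():
    # Placeholder: run dbt transformations over the landed data.
    subprocess.run(["dbt", "run", "--select", "marketing"], check=True)

with DAG(
    dag_id="marketing_ingest_daily",  # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_source)
    transform = PythonOperator(task_id="dbt_run", python_callable=run_dbt_models)
    ingest.set_downstream(transform)  # run the dbt step after ingestion
</code></pre>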
<p><strong>2) Data modeling and transformation:</strong> Data Modeling - Develops and maintains logical and physical data models to support business analytics. Creates and manages dimensional models, star/snowflake schemas, and other data structures. Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages. Implements data transformation workflows to handle data cleansing, normalization, and enrichment. Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data. Implements data quality monitoring and alerting mechanisms.</p>
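<p>For illustration only: one common way such validation and consistency checks are expressed in Python with pandas; the column names and rules are invented.</p>
<pre><code># Hypothetical data-quality checks on a transformed dataset.
import pandas as pd

def validate(df):
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        failures.append("customer_id is not unique")
    if df["order_total"].lt(0).any():
        failures.append("order_total has negative values")
    return failures

df = pd.DataFrame({"customer_id": [1, 2, 2], "order_total": [10.0, -5.0, 7.5]})
print(validate(df))  # ['customer_id is not unique', 'order_total has negative values']
</code></pre>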
<p><strong>3) Collaboration and stakeholder management:</strong> Cross-Functional Collaboration - Works closely with data analysts, data scientists, and business stakeholders to gather requirements and understand their data needs. Acts as a liaison between technical teams and business units to translate business requirements into technical specifications. Technical Communication - Clearly communicates complex technical concepts and data insights to non-technical stakeholders. Provides training and support to team members on data tools, best practices, and methodologies.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Over 4 years of experience in data development and analytics engineering using Python, SQL, DBT and Snowflake.</li>
<li>Bachelor’s degree in Computer Science, Data Science, Engineering or other Math or Technology related degrees.</li>
<li>Fluency in English</li>
</ul>
<p><strong>Software / Tools</strong></p>
<ul>
<li>SQL (must have)</li>
<li>Python (must have)</li>
<li>Snowflake (must have)</li>
<li>DBT (must have)</li>
</ul>
<p><strong>Other Critical Skills</strong></p>
<ul>
<li>Data Transformation</li>
<li>Data Quality Assurance</li>
<li>Pipeline Design and Development</li>
<li>Technical Communication</li>
<li>Independent work</li>
<li>Attention to detail</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>A dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Snowflake, DBT, Data Transformation, Data Quality Assurance, Pipeline Design and Development, Technical Communication, Independent work, Attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company that provides a range of services including technology consulting, application services, and business process outsourcing.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ws76jLTZQ1JKbCcs3CUiC4/remote-fbs-analytics-engineer-in-brazil-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>5aabf454-ae0</externalid>
      <Title>FBS Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Team Function</strong></p>
<p>The Direct modeling team is focused on creating models to guide enterprise marketing decisions that help promote brand awareness and boost sales through the direct channel.</p>
<p><strong>Role Description:</strong></p>
<p>This position plays a crucial role in the data ecosystem by iteratively transforming raw data into structured, high-quality datasets that are ready for analysis in partnership with data/decision scientists. The role primarily focuses on moderately complex business problems while receiving limited coaching and guidance from data leadership. The role combines the technical skills of a data engineer, the analytical mindset of a data analyst, and strong business acumen to ensure data is not only collected and stored efficiently but also made accessible and insightful for end users. In partnership with data/decision scientists, the position is responsible for end-to-end data workflow including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization. This position requires deep understanding of data engineering, business processes, and analytics principles as well as a proactive approach to solving complex data challenges.</p>
<p><strong>Essential Job Functions:</strong></p>
<p><strong>1) Data infrastructure development</strong>: Pipeline Design and Development - Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar. Automates data ingestion processes from various sources including databases, APIs, and third-party services. Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery. Optimizes storage solutions for performance, cost efficiency, and scalability.</p>
<p><strong>2) Data modeling and transformation:</strong> Data Modeling - Develops and maintains logical and physical data models to support business analytics. Creates and manages dimensional models, star/snowflake schemas, and other data structures. Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages. Implements data transformation workflows to handle data cleansing, normalization, and enrichment. Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data. Implements data quality monitoring and alerting mechanisms.</p>
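<p>For illustration only: a toy version of the star-schema modelling described above; every table and column name is invented.</p>
<pre><code># Toy star schema: a sales fact table enriched from a product dimension,
# then rolled up into an analytics-ready report.
import pandas as pd

dim_product = pd.DataFrame({"product_id": [1, 2],
                            "category": ["peripherals", "audio"]})
fact_sales = pd.DataFrame({"product_id": [1, 1, 2],
                           "amount": [100.0, 50.0, 75.0]})

report = (fact_sales.merge(dim_product, on="product_id", how="left")
                    .groupby("category", as_index=False)["amount"].sum())
print(report)  # revenue by product category
</code></pre>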
<p><strong>3) Collaboration and stakeholder management:</strong> Cross-Functional Collaboration - Works closely with data analysts, data scientists, and business stakeholders to gather requirements and understand their data needs. Acts as a liaison between technical teams and business units to translate business requirements into technical specifications. Technical Communication - Clearly communicates complex technical concepts and data insights to non-technical stakeholders. Provides training and support to team members on data tools, best practices, and methodologies.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Over 4 years of experience in data development and analytics engineering using Python, SQL, DBT and Snowflake.</li>
<li>Bachelor’s degree in Computer Science, Data Science, Engineering or other Math or Technology related degrees.</li>
<li>Fluency in English</li>
</ul>
<p><strong>Software / Tools</strong></p>
<ul>
<li>SQL (must have)</li>
<li>Python (must have)</li>
<li>Snowflake (must have)</li>
<li>DBT (must have)</li>
</ul>
<p><strong>Other Critical Skills</strong></p>
<ul>
<li>Data Transformation</li>
<li>Data Quality Assurance</li>
<li>Pipeline Design and Development</li>
<li>Technical Communication</li>
<li>Independent work</li>
<li>Attention to detail</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>A dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Snowflake, DBT, Data Transformation, Data Quality Assurance, Pipeline Design and Development, Technical Communication, Independent work, Attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with a diverse collective of nearly 350,000 strategic and technological experts across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/htNwC3gPnBQ9oxedafiBav/remote-fbs-analytics-engineer-in-mexico-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3487a0dd-b87</externalid>
      <Title>Associate, Data Engineer</Title>
      <Description><![CDATA[<p><strong>Associate, Data Engineer at BlackRock</strong></p>
<p>About this role</p>
<p>BlackRock is looking for a data engineer to join the Digital Data Engineering team. In this role, you will help develop data integrations between BlackRock’s internal data systems and our external marketing technology platforms. You will work with business partners to develop data structures, build ETL pipelines, and implement appropriate data governance and monitoring.</p>
<p>As part of BlackRock’s Digital organization, this role supports our mission to create AI-enabled, personalized and scalable marketing experiences. You will build the data foundations that power next generation digital platforms, audience personalization, and intelligent activation across a global ecosystem.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable data pipelines that support AI-enabled digital experiences, personalization, and marketing automation.</li>
<li>Leverage AI-driven development and testing tools to increase engineering quality, speed, and reliability.</li>
<li>Contribute to ongoing platform modernization efforts across Martech, content, analytics, and web ecosystems.</li>
<li>Collaborate with cross-functional stakeholders to ensure data is structured and governed in ways that accelerate downstream personalization and analytics use cases.</li>
<li>Architect and develop data solutions to bring new datasets into the digital ecosystem, including Private Markets data and product data.</li>
</ul>
<p><strong>Core Skills</strong></p>
<ul>
<li>You have flawless written and verbal communication skills and the ability to gain buy-in on plans from a non-technical audience.</li>
<li>You have experience working with a broad set of stakeholders, including non-technical and non-quantitative people.</li>
<li>You are comfortable using AI tools to enhance development workflows, such as prototyping, testing, documentation, and data validation.</li>
<li>You have a strong desire to develop creatively and promote innovation.</li>
<li>You&#39;re self-motivated and able to think big while also taking direction and feedback.</li>
<li>You have excellent teamwork and collaboration skills.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>3+ years’ experience in SQL and Python, with experience in both RDBMS and Big Data structures. Existing experience with Snowflake-specific concepts is desirable.</li>
<li>Familiarity with using AI-assisted development tools (e.g., code generation, code review, unit test development) to improve quality and delivery efficiency.</li>
<li>ETL and pipeline development experience with Airflow and DBT is a plus.</li>
<li>CI/CD experience with Azure and understanding of API frameworks is a plus.</li>
<li>B.S. / M.S. degree in Computer Science, Engineering, or a related discipline.</li>
<li>Knowledge of Marketing technology platforms is desirable, but not required (e.g., Eloqua/Marketo, web analytics platforms, customer data platforms).</li>
<li>Relentless desire for understanding how processes work. Creativity in solving unconventional problems.</li>
<li>Adaptability and resiliency when overcoming challenges.</li>
</ul>
<p><strong>Our benefits</strong></p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p><strong>Our hybrid work model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p><strong>About BlackRock</strong></p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, RDBMS, Big Data structures, Snowflake-specific concepts, AI-assisted development tools, code generation, code review, unit test development, ETL and pipeline development, Airflow, DBT, CI/CD experience, Azure, API frameworks, B.S. / M.S. degree in Computer Science, Engineering, or a related discipline, Marketing technology platforms, Eloqua/Marketo, web analytics platforms, customer data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dgM5uMjA3xyRYgwF3u3x72/associate%2C-data-engineer-in-budapest-at-blackrock?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>18528dac-ae1</externalid>
      <Title>Threat Collections Engineer</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Threat Collections Engineer to join our Threat Intelligence team. In this role, you will build the infrastructure that powers our threat discovery capabilities—integrating external data sources, developing detection systems for automated lead generation, and creating internal tooling that scales our investigators&#39; impact.</p>
<p>This is a foundational engineering role on a small, high-impact team. You will take projects from proof-of-concept to production, work closely with investigators to understand their needs, and help scale what may become a multi-person collections function.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build automated detection systems that use disparate signals to identify abusive behaviour.</li>
<li>Take systems from idea to proof-of-concept to production-grade with appropriate monitoring, documentation, and maintenance processes</li>
<li>Develop and maintain YARA rule infrastructure, including tools for writing, validating, and testing rules against real data (a minimal sketch follows this list)</li>
<li>Create integrations with external threat intelligence platforms (e.g. VirusTotal, Censys, Urlscan) via MCP servers to enable multi-source correlation during investigations</li>
<li>Build data pipelines that ingest intelligence from RSS feeds, CTI news sources, and partner sharing, using Claude to extract TTPs and generate targeted hunting queries</li>
<li>Develop behavioural analytics capabilities using DBT-based frameworks and create searchable audit logging infrastructure</li>
<li>Establish feedback loops with investigators to tune detection systems and reduce false positives</li>
<li>Scrape and normalise data from external sources to feed threat detection and enrichment workflows</li>
</ul>
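<p>For illustration only: the smallest useful shape of such YARA tooling, using the yara-python bindings; the rule is a made-up example, not an actual detection.</p>
<pre><code># Compile a YARA rule and test it against sample data (rule content is invented).
import yara  # pip install yara-python

RULE = r"""
rule demo_suspicious_string
{
    strings:
        $a = "evil-payload"  // illustrative indicator only
    condition:
        $a
}
"""

rules = yara.compile(source=RULE)
matches = rules.match(data=b"prefix evil-payload suffix")
print([m.rule for m in matches])  # ['demo_suspicious_string']
</code></pre>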
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have strong coding proficiency in Python and SQL for building detection logic, data pipelines, and automation</li>
<li>Have experience with data pipeline orchestration tools (Airflow, DBT, or similar)</li>
<li>Have familiarity with threat intelligence concepts including IOCs, YARA rules, and threat correlation techniques</li>
<li>Have experience integrating external APIs and building data ingestion systems</li>
<li>Can translate investigator needs and workflows into technical requirements</li>
<li>Are comfortable building v0 systems and iterating based on user feedback</li>
<li>Have strong communication skills for working closely with non-engineering stakeholders</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience with threat intelligence sharing frameworks (e.g. MISP, STIX/TAXII)</li>
<li>Background in cyber threat intelligence, security operations, or abuse detection</li>
<li>Experience building MCP servers or similar tool integrations for AI systems</li>
<li>Familiarity with web scraping and data extraction at scale</li>
<li>Experience with behavioural analytics or anomaly detection systems</li>
<li>Understanding of LLM capabilities and how to leverage them for automation</li>
<li>A Top Secret Clearance</li>
</ul>
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics as it does with computer science.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $320,000 USD</Salaryrange>
      <Skills>Python, SQL, Airflow, DBT, YARA rules, Threat intelligence, API integration, Data ingestion, Web scraping, Data extraction, MISP, STIX/TAXII, Cyber threat intelligence, Security operations, Abuse detection, LLM capabilities, Automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>300000</Compensationmin>
      <Compensationmax>320000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5074937008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA, Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d6450ee6-847</externalid>
      <Title>Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.</p>
<p>A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.</p>
<p>Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.</p>
<p><strong>Sample projects include...</strong></p>
<ul>
<li>A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). You design and ship the replacement while keeping the existing system running.</li>
<li>A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.</li>
<li>Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.</li>
<li>Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others (a sketch of such a contract check follows this list).</li>
<li>Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.</li>
</ul>
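<p>For illustration only: a tiny schema-contract check of the kind the fourth sample project implies; the field names and types are invented.</p>
<pre><code># Hypothetical schema contract: consumers pin expected fields and types so a
# producer-side change fails fast instead of silently degrading downstream.
EXPECTED_SCHEMA = {"session_id": str, "event": str, "ts_ms": int}  # invented

def check_contract(record):
    missing = EXPECTED_SCHEMA.keys() - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    for field, expected_type in EXPECTED_SCHEMA.items():
        if not isinstance(record[field], expected_type):
            raise TypeError(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(record[field]).__name__}")

check_contract({"session_id": "abc", "event": "completion", "ts_ms": 1700000000000})
</code></pre>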
<p><strong>What we&#39;re looking for</strong></p>
<p>We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.</p>
<p>Strong signals include:</p>
<ul>
<li>Deep experience with Spark (Databricks or open-source Spark both count)</li>
<li>Production experience with Ray Data</li>
<li>Hands-on ownership of large data pipelines and storage systems</li>
<li>Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as across the compute, storage, and networking layers</li>
<li>Clear thinking about data modeling and long-term maintainability</li>
<li>Good judgment about when to patch and when to rebuild</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience running or scaling ClickHouse</li>
<li>Familiarity with dbt, Dagster, or similar orchestration and modeling tools</li>
</ul>
<p>We&#39;re in-person with cozy offices in North Beach, San Francisco and Manhattan, New York, replete with well-stocked libraries.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Spark, Ray Data, data pipelines, storage systems, debugging performance issues, data modeling, long-term maintainability, ClickHouse, dbt, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is the company behind the AI-powered code editor of the same name. It ships daily releases whose signals power model improvement, evals, and experimentation, and it has offices in North Beach, San Francisco and Manhattan, New York.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-data-infrastructure?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>85b77752-f3d</externalid>
      <Title>Data Scientist</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>As an early member of Cursor&#39;s data team, you&#39;ll help build an AI data program operating at incredible scale while partnering directly with founders and area leads. You&#39;ll work hands-on across the entire data stack and at the bleeding edge of AI, turning billions of user-AI interactions into strategy that gets users to &#39;aha&#39; faster and expands their usage.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Partner with area leads to shape data-informed decisions across the business.</li>
<li>Define, track, and own metrics like power usage and user satisfaction that multiple teams and leadership depend on.</li>
<li>Run experiments end-to-end: design, analyse, and translate into clear product recommendations.</li>
<li>Build pipelines, dashboards, and analyses that make self-serve insights accessible and trustworthy.</li>
<li>Establish data culture and foundations as an early member of the data team.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have at least 2-4 years of full-time data science experience.</li>
<li>You have a strong track record of shipping high-impact work when operating in ambiguity.</li>
<li>You can turn complex data into clear insights and stories for engineers, PMs, and execs.</li>
<li>You&#39;ve worked at a hyper-growth startup or research org—you know how to be scrappy and ship insights across multiple product areas.</li>
<li>You&#39;re fluent in SQL, Python, and A/B testing, and can write pipelines to unblock yourself (a minimal A/B-test sketch follows this list).</li>
</ul>
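<p>For illustration only: the kind of A/B analysis implied above, as a two-proportion z-test using just the Python standard library; all counts are made up.</p>
<pre><code># Two-proportion z-test for a conversion A/B test (numbers invented).
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z={z:.2f}, p={p:.3f}")  # z=1.88, p=0.060
</code></pre>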
<p><strong>Bonus points if</strong></p>
<ul>
<li>You have hands-on experience with dbt.</li>
<li>You have experience working on productivity software or AI tooling.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, AB testing, dbt, productivity software, AI tooling, data science, data analysis, data visualisation, machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is the company behind the AI-powered code editor of the same name, operating at large scale. Its early-stage data team is building an AI data program.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/data-scientist-agents?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>366de878-041</externalid>
      <Title>Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As one of Cursor&#39;s first Analytics Engineers, you&#39;ll work hands-on across the entire stack to build data products and drive strategic decisions across product, GTM, and research. You&#39;ll partner directly with founders and area leads on critical questions, collaborating with stakeholders who are eager to jump into SQL and dbt. Through this collaboration, you&#39;ll pioneer the next frontier of data: defining how Cursor itself transforms data science by building a data stack around Cursor Agent for self-serve analytics.</p>
<ul>
<li>Read our blogpost on measuring the impact of Semantic Search: https://cursor.com/blog/semsearch</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Partner with area leads in Finance, Growth, Product, and Agent Quality to understand their data needs and build foundational datasets.</li>
<li>Up-level our data stack by evaluating new tooling and AI integrations, while partnering with Data Infra and product engineers to maximise the impact of existing tooling.</li>
<li>Ensure the quality and reliability of data in our warehouse.</li>
<li>Help guide a vibrant self-serve data culture to make self-serve insights accessible and trustworthy.</li>
<li>Establish data culture and foundations as an early member of the data team and our first analytics engineer.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have at least <strong>4+ years</strong> of full-time analytics engineering experience.</li>
<li>You&#39;ve been an early data member at a hyper-growth startup or research org and know how to scale a data function from 10 to 50 data scientists.</li>
<li>You&#39;ve optimised queries for speed and cost on datasets that grow by billions of rows per day.</li>
<li>You can write SQL and Python in your sleep.</li>
<li>You care deeply about accuracy and detail.</li>
<li>You&#39;re excited about the modern data stack and self-serve data.</li>
<li>You&#39;re excited to build data products end to end, even if it requires going outside the original job description.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, dbt, Data Infra, AI integrations, Modern data stack, Self-serve data, Data culture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor builds an AI-powered code editor. Its data organisation builds data products and drives strategic decisions across product, GTM, and research, with data-savvy stakeholders who are eager to jump into SQL and dbt.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/data-engineer-analytics?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>63e3e256-1a7</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Senior Data Engineer</strong></p>
<p><strong>Location</strong></p>
<p>London</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Commercial / Revenue Operations</p>
<p>Synthesia is the world&#39;s leading AI video platform for business, used by over 90% of the Fortune 100. Founded in 2017, the company is headquartered in London, with offices and teams across Europe and the US.</p>
<p>As AI continues to shape the way we live and work, Synthesia develops products to enhance visual communication and enterprise skill development, helping people work better and stay at the centre of successful organisations.</p>
<p>Following our recent Series E funding round, where we raised $200 million, our valuation stands at $4 billion. Our total funding exceeds $530 million from premier investors including Accel, NVentures (Nvidia&#39;s VC arm), Kleiner Perkins, GV, and Evantic Capital, alongside the founders and operators of Stripe, Datadog, Miro, and Webflow.</p>
<p>We&#39;re hiring a Senior Data Engineer to join Synthesia and take ownership of our core data systems. You&#39;ll be responsible for designing and maintaining scalable pipelines, optimising data models, and ensuring high data quality and governance standards.</p>
<p><strong>What you&#39;ll do at Synthesia:</strong></p>
<ul>
<li>Architect and scale robust, end-to-end data pipelines that ingest and transform complex semi-structured and structured data into our Snowflake data warehouse (a flattening sketch follows this list).</li>
<li>Own the evolution of our dbt project - implementing modular modelling patterns and other best practices to ensure a &#39;single source of truth&#39; for the entire organisation.</li>
<li>Manage platform infrastructure in Snowflake, AWS and other tools.</li>
<li>Continuously optimise warehouse performance and cost by diagnosing bottlenecks, tuning inefficient queries, and improving how compute resources are used as we scale.</li>
<li>Bridge the gap between experimental data science workflows and production, building the infrastructure and orchestration needed to deploy and monitor batch ML jobs.</li>
<li>Drive best practices in data security, governance, and compliance, particularly with regards to AI.</li>
<li>Partner with cross-functional stakeholders to understand data requirements and translate them into technical solutions.</li>
</ul>
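<p>For illustration only: the flattening step hinted at in the first bullet, reduced to plain Python; the event shape is invented, and a real pipeline would land the rows in Snowflake rather than print them.</p>
<pre><code># Flatten nested JSON events into flat rows ready for a warehouse load.
def flatten(obj, parent_key="", sep="_"):
    rows = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            rows.update(flatten(value, new_key, sep))
        else:
            rows[new_key] = value
    return rows

event = {"id": 42, "user": {"org": "acme", "plan": "enterprise"}, "duration_s": 12.5}
print(flatten(event))
# {'id': 42, 'user_org': 'acme', 'user_plan': 'enterprise', 'duration_s': 12.5}
</code></pre>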
<p><strong>What we&#39;re looking for:</strong></p>
<ul>
<li>5+ years of experience as a Data Engineer or in a closely related role, with a proven track record of building and operating production data systems.</li>
<li>Experience working in an early-stage or scaling data function. You&#39;re comfortable taking ownership and wearing multiple hats when needed.</li>
<li>Strong foundations in software engineering and data modelling best practices, with an ability to design systems that are maintainable, scalable, and easy for others to build on.</li>
<li>Deep expertise in SQL, and solid experience using Python or similar languages to build data pipelines, tooling, and orchestration (Airflow).</li>
<li>Hands-on experience managing cloud infrastructure using infrastructure-as-code (e.g. Terraform) on AWS, GCP, or similar platforms.</li>
<li>A pragmatic approach to data platform design, with an eye for performance, cost efficiency, and operational reliability.</li>
<li>Excellent communication skills: you can work effectively with technical and non-technical stakeholders to gather requirements, explain trade-offs and communicate data team needs.</li>
<li>A product-oriented mindset, with an understanding of how data can shape decision making and accelerate company growth.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Airflow, Terraform, AWS, GCP, Snowflake, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.io.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. The company is headquartered in London, with offices and teams across Europe and the US.</Employerdescription>
      <Employerwebsite>https://www.synthesia.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/46650970-494a-4d4b-ab4b-75c2a3b06daf?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>438dbd4f-1a6</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>Senior Analytics Engineer</strong></p>
<p><strong>About the role</strong></p>
<p>As an Analytics Engineer, you&#39;ll be a key early member of our data function, responsible for building and evolving the analytics foundations that power product decision-making across the company. You&#39;ll work closely with Product, Analytics, and Engineering to turn raw product data into trusted, well-defined datasets, metrics, and data products that scale with the business.</p>
<p><strong>What you&#39;ll be doing</strong></p>
<ul>
<li>Partner with Product, Analytics, and Engineering to understand data needs and translate ambiguous questions into clear, scalable data models</li>
<li>Define, build, and maintain core dbt models that transform raw product data into canonical, well-documented datasets</li>
<li>Own metric definitions and transformation logic to ensure consistency, accuracy, and trust across reporting and analysis</li>
<li>Establish and uphold data quality standards, testing, and expectations around freshness and reliability (a freshness-check sketch follows this list)</li>
<li>Work closely with Product Analysts to enable faster, higher-quality insights and decision-making</li>
<li>Support data consumption in tools like Amplitude and Omni, ensuring data is intuitive and easy to self-serve</li>
<li>Act as a subject-matter expert for analytics engineering, guiding best practices and helping others solve data problems</li>
<li>Contribute to shaping the future direction of our data stack as product complexity and scale increase</li>
</ul>
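<p>For illustration only: a minimal freshness check of the kind the fourth bullet describes, assuming a DB-API cursor into the warehouse; the table name and threshold are placeholders.</p>
<pre><code># Hypothetical freshness check: fail if the newest row is older than an SLA.
from datetime import datetime, timedelta, timezone

MAX_STALENESS = timedelta(hours=6)  # invented SLA

def check_freshness(cursor, table="analytics.fct_events"):  # placeholder name
    cursor.execute(f"SELECT MAX(loaded_at) FROM {table}")
    (latest,) = cursor.fetchone()
    age = datetime.now(timezone.utc) - latest
    if age &gt; MAX_STALENESS:
        raise RuntimeError(f"{table} is stale: last load {age} ago")
    return age
</code></pre>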
<p><strong>About the setup</strong></p>
<ul>
<li>⚒️ Stack: dbt, Snowflake, Amplitude, Omni</li>
<li>🌱 Early, high-impact role with real ownership over the analytics layer</li>
<li>🤝 Highly collaborative environment with product- and data-savvy stakeholders</li>
<li>🚀 Outcome-focused team where pragmatism and impact matter more than process</li>
</ul>
<p><strong>We&#39;d love to hear from you if</strong></p>
<ul>
<li>You have 6+ years of experience in analytics engineering or data engineering, ideally in product-led or high-growth environments</li>
<li>You have strong hands-on experience with dbt and enjoy designing modular, scalable, and well-tested data models</li>
<li>You write advanced, performant, and maintainable SQL</li>
<li>You can translate business and product requirements into robust data pipelines and metrics</li>
<li>You have a strong product mindset and understand how data and metrics influence product direction</li>
<li>You&#39;re comfortable operating across the stack and taking ownership end to end when needed</li>
<li>You care deeply about data quality, clarity, and trust</li>
<li>You&#39;re outcome-driven and can clearly articulate the impact your work has had on teams or the business</li>
</ul>
<p><strong>Our culture</strong></p>
<p>At Synthesia we&#39;re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions.</p>
<p><strong>The good stuff...</strong></p>
<ul>
<li>A hybrid or remote-friendly environment for candidates based in Europe. You can work fully remote if you&#39;re not local to an office, or hybrid from our London, Amsterdam, Munich, Zurich or Copenhagen offices.</li>
<li>A competitive salary + stock options</li>
<li>25 days of annual leave + public holidays (plus the option to take 5 days unpaid leave and carry 5 days over)</li>
<li>An established company culture with optional regular socials and company retreats</li>
<li>Paid parental leave entitling primary caregivers to 16 weeks of full pay and secondary caregivers to 5 weeks of full pay</li>
<li>A generous recruitment referral scheme if you help us to hire</li>
<li>The equipment you need to be successful in your role</li>
</ul>
<p><em>You can see more about who we are and how we work here: https://www.synthesia.io/careers</em></p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>A competitive salary + stock options</Salaryrange>
      <Skills>dbt, Snowflake, Amplitude, Omni, SQL, data engineering, product-led environments, modular data models, scalable data models, well-tested data models, data quality, data clarity, data trust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synthesia</Employername>
      <Employerlogo>https://logos.yubhub.co/synthesia.io.png</Employerlogo>
      <Employerdescription>Synthesia is the world&apos;s leading AI video platform for business, used by over 90% of the Fortune 100. The company is headquartered in London, with offices and teams across Europe and the US.</Employerdescription>
      <Employerwebsite>https://www.synthesia.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/synthesia/c11f83bc-46db-4c7b-a2ea-38f2ace507ba?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or structured streaming); a minimal batch sketch follows this list</li>
</ul>
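<p>For illustration only: a minimal PySpark batch job of the shape the Spark requirement points at; paths and columns are invented.</p>
<pre><code># Minimal PySpark batch job: read, aggregate, write (all names invented).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo_batch").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")  # placeholder path
daily = (events.groupBy(F.to_date("ts").alias("day"), "event_type")
               .agg(F.count("*").alias("n")))
daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_events/")
spark.stop()
</code></pre>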
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>