<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>46f577e7-522</externalid>
      <Title>Staff Data Analyst</Title>
      <Description><![CDATA[<p>Honor Technology&#39;s mission is to change the way society cares for older adults. As a leader in aging care innovation, Honor provides the technology, tools, and services that empower older adults to live life on their own terms.</p>
<p>We&#39;re looking for a Staff Data Analyst to join our team. This role reports to the VP of Data and joins a team of five other analysts collaborating closely with stakeholders across the entire organization. We&#39;re looking for someone who is excited to jump into new problems and make an impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Solve problems that have real-world impact</li>
<li>Thrive in diverse, cross-functional environments, collaborating with partners across design, product, engineering, and operations</li>
<li>Live at the intersection of software and the real world, whether that&#39;s optimizing complex operational problems or tracing the lineage of a key metric through a dozen transformations</li>
<li>Share knowledge, mentor others, and contribute to a healthy, inclusive team culture</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of professional analytics experience with a track record of owning analytics systems and solving applied business problems</li>
<li>Strong stakeholder management: you are comfortable translating business needs into concrete requirements and communicating tradeoffs clearly</li>
<li>Excellent written and verbal communication skills</li>
<li>Deep experience with our analytics stack (Git, Fivetran, Redshift, dbt, Looker) or equivalent tools, and a desire to learn new ones!</li>
<li>Passion for using your data intuition to navigate a sea of messy data, generate hypotheses, and implement solutions that directly impact the business</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Base pay is just a part of our total rewards program</li>
<li>Honor offers generous equity packages that increase with position level and responsibilities</li>
<li>A 401K with up to a 4% employer match</li>
<li>Medical, dental, and vision coverage including zero-cost plans for employees</li>
<li>Short-term disability, long-term disability, and life insurance are fully employer-paid with a voluntary additional life insurance option</li>
<li>A generous time-off program, mental health benefits, wellness program, and discount program</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$148,500-$165,000 USD</Salaryrange>
      <Skills>Git, Fivetran, Redshift, DBT, Looker, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults to live life on their own terms. It has a global franchise network and over 100,000 Care Pros.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>148500</Compensationmin>
      <Compensationmax>165000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8451598002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9faf3487-9d2</externalid>
      <Title>Data Analytics/Engineer</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p><strong>Role Summary</strong></p>
<p>We are seeking passionate and talented Data/Analytics Engineers to join our team. In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>
<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions. Work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>
<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>
<li>Define and enforce data governance, metadata management, and data lineage standards.</li>
<li>Ensure data integrity, security, and compliance with industry standards.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>Master’s degree in Computer Science, Engineering, Statistics, or a related field.</li>
<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>
<li>Proficiency in Python and SQL.</li>
<li>Experience with dbt.</li>
<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).</li>
<li>Strong analytical and problem-solving skills, with attention to detail.</li>
<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>
<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>
<li>Background in building self-service data platforms for analytics and AI use cases.</li>
</ul>
<p><strong>Hiring Process</strong></p>
<ul>
<li>Intro call with Recruiter - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>
<li>Technical interview - System Design - 45 min</li>
<li>Value talk interview - 30 min</li>
<li>References</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and equity package</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and equity package</Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse, Machine learning pipelines, MLOps, Feature engineering, Containerization, Orchestration, DevOps, CI/CD pipelines, Infrastructure-as-code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that designs and develops high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions. The company&apos;s comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7e078ceb-e9a</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>
<p>Are you enthusiastic to mine raw data and realize its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts in implementing advanced physics and ML Models in production?</p>
<p>The Data Engineer role resides within the Ford’s Electric Vehicle organization. In this role, you will work on building scalable and robust data pipelines to process large volumes of connected vehicle data to support the Ford vehicle prognostic initiatives.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>
<li>Build data pipelines to monitor data quality and the performance of analytical models.</li>
<li>Maintain the infrastructure of the data platform using Terraform and continuously develop, evaluate, and deliver code using CI/CD.</li>
<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>
<li>Implement an enterprise data governance model and actively promote data protection, sharing, reuse, quality, and standards.</li>
<li>Enhance and maintain the DevOps capabilities of the data platform.</li>
<li>Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>
<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>
<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>
<li>Perform any necessary data mapping, data lineage activities and document information flows.</li>
<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>
<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>
<li>Provide visibility to data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>
<li>Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions.</li>
<li>Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models developed by data scientists within Ford.</li>
<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>
<li>4 years of professional experience in:
<ul>
<li>Data engineering, data product development, and software product launches</li>
<li>At least three of the following languages: Java, Python, Spark, Scala, SQL</li>
</ul>
</li>
<li>3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
<ul>
<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, and Google BigQuery.</li>
<li>Workflow orchestration tools like Airflow.</li>
<li>Relational database management systems like MySQL, PostgreSQL, and SQL Server.</li>
<li>Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.</li>
<li>Microservices architecture to deliver large-scale real-time data processing applications.</li>
<li>REST APIs for compute, storage, operations, and security.</li>
<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, and Docker.</li>
<li>Project management tools like Atlassian JIRA.</li>
</ul>
</li>
</ul>
<p><strong>Even better if you have...</strong></p>
<ul>
<li>Ph.D. or foreign equivalent degree in Computer Science, Software Engineering, Information System, Data Engineering, or a related field.</li>
<li>2 years of experience with ML Model Development and/or MLOps.</li>
<li>Committed code to improve open-source data/software engineering projects</li>
<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>
<li>GCP Professional Certifications.</li>
<li>Demonstrated passion to mine raw data and realize its hidden value.</li>
<li>Passion to experiment/implement state of the art data engineering methods/techniques.</li>
<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>
<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>
<li>Analytics skills to profile data, troubleshoot data pipeline/product issues.</li>
<li>Ability to simplify, clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Salary grades 6-8</Salaryrange>
      <Skills>Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is an American multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7ec15bc8-951</externalid>
      <Title>GTM Engineer</Title>
      <Description><![CDATA[<p>You&#39;ll build the AI-native GTM systems and data infrastructure that turn product usage signals into enterprise sales opportunities. Gamma&#39;s PLG flywheel generates enormous engagement data across millions of users. Your job is to create the systems that identify which accounts should talk to sales, when they&#39;re ready, and why.</p>
<p>This is a 0-to-1 role at the intersection of data, product, and revenue. You&#39;ll build Product Qualified Lead identification systems, design AI-powered lead scoring models, and implement data pipelines that give sales and customer success real-time visibility into engagement and expansion signals. You&#39;ll partner with Product and Data teams to instrument tracking, ensure data quality, and continuously improve how we identify and convert high-intent accounts.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build Product Qualified Lead (PQL) identification systems that surface enterprise buying signals based on team expansion, engagement, feature adoption, and company attributes</li>
<li>Build AI agents for automated account research using LLM APIs to analyze company websites, news, funding events, and tech stacks, generating personalized talking points for sales</li>
<li>Design and implement data pipelines from product usage data to HubSpot, enabling sales and CS teams to see real-time engagement, usage trends, and expansion signals</li>
<li>Create AI-powered lead scoring models combining product behavior, firmographics, and engagement patterns to predict conversion likelihood</li>
<li>Build dashboards and reporting that give sales, CS, and leadership visibility into account health, product adoption, expansion opportunities, and churn risk</li>
<li>Implement reverse ETL infrastructure using tools like Census, Hightouch, or custom solutions to ensure product data flows seamlessly into GTM systems</li>
</ul>
<p><strong>What you&#39;ll bring</strong></p>
<ul>
<li>3–5 years of experience in a GTM Engineer, Growth Engineer, Revenue Ops, or Analytics Engineering role at a PLG B2B SaaS company</li>
<li>Strong technical foundation in Python and SQL with experience building data pipelines, ETL/reverse ETL workflows, and integrating product data with GTM systems like HubSpot or Salesforce</li>
<li>API integration expertise with experience building workflows using tools like n8n, Zapier, Make, or Tray.io</li>
<li>Deep understanding of PLG metrics with the ability to operationalize activation, engagement, and expansion signals, and a track record of building systems (PQL models, AI agents, predictive analytics) that drove measurable pipeline or revenue</li>
<li>Scrappy builder mindset with the judgment to balance custom builds versus off-the-shelf tools, ideally with experience helping build early data systems fueling a PLG-to-enterprise transition</li>
<li>Data warehouse experience (Snowflake, BigQuery, Redshift) and familiarity with dbt or similar transformation tools (Nice to have)</li>
<li>Production machine learning experience building, deploying, and monitoring predictive models (Nice to have)</li>
</ul>
<p><strong>Compensation range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $170K - $215K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$170K - $215K</Salaryrange>
      <Skills>Python, SQL, data pipelines, ETL/reverse ETL workflows, API integration, n8n, Zapier, Make, Tray.io, PLG metrics, predictive analytics, dbt, Snowflake, BigQuery, Redshift, production machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a PLG B2B SaaS company.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>170000</Compensationmin>
      <Compensationmax>215000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/e068135f-9816-4e5d-bd93-6464f314c57a</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>61234903-9fa</externalid>
      <Title>Engineering Manager (Java or Typescript) - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>
<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>
<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>
<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>
<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. Together, you&#39;ll ensure a seamless and cohesive user experience.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Frontend: Typescript and NodeJS processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>
<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>
<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>
<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>
<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>
<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>
<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>
<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>
<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKRs) and roadmap.</li>
<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>
<li>Lead strategic team-driven projects, identify opportunities, and define and uphold quality standards.</li>
<li>Foster a great team culture aligned with the company values, ownership, autonomy, and inclusivity within your team and the entire department.</li>
<li>Take full responsibility for delivering impactful features to millions of users annually.</li>
</ul>
<p>The role includes dedicating approximately 40-50% of your time to individual contributor work focused on feature implementation.</p>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, a related technical field or equivalent practical experience.</li>
<li>Experience building and implementing backend services and/or frontend applications.</li>
<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>
<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>
<li>Love for building world-class products with a great user experience.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts, people we can all relate to, making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year working from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Competitive Package: 95.000-125.000€ + VSOPs, based on relevant experience and seniority; learn more about our approach to compensation here.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>Typescript, NodeJS, ReactJS, Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search and booking services for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>95000</Compensationmin>
      <Compensationmax>125000</Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/1558189</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad717304-da7</externalid>
      <Title>Intern Data Analytics (all genders)</Title>
      <Description><![CDATA[<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>
<p>This internship provides a great opportunity to gain hands-on experience in Data Analytics. You will work alongside a team of highly skilled and dedicated professionals who are committed to offering strong mentorship and guidance to help you start your career in the field of data.</p>
<p>Duration: 6 months. Location: Munich, 2-3 office days per week.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>
<li>Data Pipelines: Airflow, DBT.</li>
<li>Data Visualization: Looker.</li>
<li>Data Analytics: SQL, Python.</li>
<li>Collaboration: Git, Atlassian.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a Data Analytics Intern at Holidu, you’ll help our company make smarter, data-driven decisions, while being supported by a Senior Analyst.</p>
<p>This role goes beyond building dashboards. We want curious, proactive people who want to become data advisors: not only delivering reports, but understanding the business context, which questions those reports answer, and why they matter.</p>
<ul>
<li>Collect, analyse, and interpret large datasets to help solve real business challenges.</li>
<li>Build dashboards and reports using tools like SQL, Python, and Looker.</li>
<li>Collaborate closely with teams such as Product, Marketing, or Finance to help them extract actionable insights from data.</li>
<li>Build and improve data pipelines using cutting-edge technologies.</li>
<li>We are an AI-first team. Rather than manually executing repetitive tasks, you will use AI to work smarter and automate workflows.</li>
<li>Collaborate with our Data Scientists and get exposure to:
<ul>
<li>Data preparation and exploratory data analysis.</li>
<li>How ML models are built, evaluated, and deployed in real life.</li>
</ul>
</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>Currently enrolled in or recently completed a Bachelor’s or Master’s degree in a quantitative field (e.g., Business Analytics, Data Science, Economics, Statistics, Mathematics, Engineering or similar).</li>
<li>Understanding of SQL and Python, proficiency in Excel/Google Sheets and a desire to learn visualization tools like Looker.</li>
<li>Knowledge of Machine Learning and Statistical models is a plus.</li>
<li>Strong analytical and problem-solving skills, and attention to detail.</li>
<li>Curiosity to learn and a passion for solving data problems.</li>
<li>Good communication and presentation skills.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Compensation: Get a fair salary.</li>
<li>Impact: Make a difference for hundreds of thousands of monthly users.</li>
<li>Growth: Take responsibility from day one and develop through regular feedback.</li>
<li>Community: Engage with international, diverse, yet like-minded colleagues through regular events and 2 office days per week with your team.</li>
<li>Flexibility: Benefit from our hybrid work policy and the chance to work from other local offices for up to 8 weeks a year.</li>
<li>Fitness: Get an Urban Sports Club corporate subscription or a premium gym membership at a discounted rate.</li>
</ul>
]]></Description>
      <Jobtype>Internship</Jobtype>
      <Experiencelevel>internship</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Looker, Git, Atlassian, Airflow, DBT, AWS Stack, Redshift, Athena, Glue, S3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search and recommendation services for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2556233</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6690b2fa-cab</externalid>
      <Title>(Senior) Team Lead Data Analytics (all genders)</Title>
<Description><![CDATA[<p>At Holidu, data isn&#39;t just a support function; it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>
<p>This role is on-site in Munich, with two office days per week.</p>
<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. You will manage four direct reports and collaborate cross-functionally with data engineers and data scientists.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</li>
<li>Support, together with your team, a range of stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management).</li>
<li>As a member of the BI leadership team, help shape the department strategy and the future of AI-powered data products.</li>
<li>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</li>
<li>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</li>
<li>Shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</li>
</ul>
<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS Redshift, Athena, Glue, S3, Airflow, dbt, Looker, SQL, Python, Git, Jira, Confluence, Slack</Skills>
      <Category>Technology</Category>
      <Industry>Travel Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2598226</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Jenkins, AWS (EKS, S3, Redshift, Athena), DuckDB, MLflow, SageMaker, deployment APIs, Kafka, Airbyte, Fivetran, ELK, Grafana, Looker, OpsGenie, Claude, Copilot, Codex</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field: we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning: you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above: familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000 - £220,000</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477d343e-e37</externalid>
      <Title>Customer Success Architect</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>
<p>About the Customer Success Team:</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role:</p>
<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, beginning in pre-sales, where you run proofs of concept to demonstrate quick time to value, and continuing through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will advise them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>
<p>For large enterprise customers, post onboarding, you will also continue alongside the Account Managers to drive data trust and product adoption for 100+ end user teams through a change management rollout approach.</p>
<p>Responsibilities:</p>
<ul>
<li>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</li>
<li>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</li>
<li>Own the customer’s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</li>
<li>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</li>
<li>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</li>
<li>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</li>
<li>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</li>
<li>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</li>
<li>Bridge the technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</li>
<li>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</li>
<li>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</li>
</ul>
<p>We&#39;re Looking For Someone Who Has</p>
<ul>
<li>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</li>
<li>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</li>
<li>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</li>
<li>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</li>
<li>A demonstrated record of success in customer success, client-facing professional services, consulting, or technical project management roles</li>
<li>Excellent written, analytical, and communication skills</li>
<li>Strong process and/or project delivery discipline</li>
<li>Eagerness to learn new technologies and adapt to evolving customer needs</li>
</ul>
<p>We&#39;d Be Extra Excited For Someone Who Has</p>
<ul>
<li>Experience querying, modeling, and transforming data in at least one core tool, such as SQL, dbt, Python, Business Intelligence tools, or Product Analytics tools</li>
<li>Familiarity with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</li>
<li>Familiarity with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</li>
<li>Familiarity with analytics best practices across business segments and verticals</li>
</ul>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment, combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</li>
<li>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors including Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leading provider of digital analytics software, serving over 29,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7506821</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35458586-a42</externalid>
      <Title>Enterprise Architect, Finance &amp; Legal Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Enterprise Architect to join our Technology, Data and Intelligence team. As an Enterprise Architect, you will be responsible for defining and delivering the technology architecture strategy across Finance and Legal functions, enabling data-driven decision-making, automation, and operational excellence.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Defining the target-state architecture for Finance and Legal applications, ensuring alignment with enterprise strategy and growth objectives.</li>
<li>Leading the design and implementation of end-to-end architectural solutions for Finance and Legal systems, ensuring integration, scalability, and performance across the enterprise.</li>
<li>Developing and maintaining a multi-year roadmap for modernization across ERP, FP&amp;A, Legal, and Sales Compensation systems.</li>
<li>Ensuring systems are designed with identity-first security principles, integrating with Okta and other IAM solutions for authentication, authorization, and compliance.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>15+ years of software engineering experience, including significant time as an Architect or Principal in ERP systems (Oracle/NetSuite/SAP), FP&amp;A systems (Anaplan), and/or CLM systems (Apttus/Conga/Ironclad).</li>
<li>Excellent storytelling and communication skills; comfortable presenting to both technical and executive stakeholders.</li>
<li>Multiple full-cycle ERP (Oracle or NetSuite) implementations.</li>
<li>Deep understanding of the Finance business process areas: Order to Cash, Record to Report, Source to Pay, Plan to Report (FP&amp;A), Treasury, Credit Collection, Revenue Recognition, Subscription Billing, and Contract Lifecycle Management within Legal Ops.</li>
<li>Demonstrated hands-on experience architecting functional and technical solutions within major business applications, with specific expertise in NetSuite (or Oracle), Apttus/Conga (or Ironclad), Anaplan, Coupa, Scout, and tax engines such as Avalara, Vertex, or OneSource, including an understanding of their data models and APIs in the context of solution development and integrations.</li>
<li>Experience architecting and delivering AI agents using leading LLMs such as Gemini, OpenAI models, or Claude.</li>
<li>Experience managing software and/or vendor selection with the enterprise’s end-state architecture in view.</li>
<li>Proficiency with middleware such as MuleSoft, Workato, Boomi, or Informatica for connecting Finance, Legal, CRM, and data platforms.</li>
<li>Familiarity with code, configuration, and system performance standards and reviews to ensure quality, scalability, and compliance with enterprise standards.</li>
<li>Proficiency with AWS, Azure, or GCP, with knowledge of data lakes/warehouses (Snowflake, Redshift, BigQuery) for SaaS revenue and compliance analytics.</li>
<li>Identity &amp; Security: knowledge of SSO, OAuth, SAML, SCIM, and Zero Trust principles, with hands-on integration experience in Okta or similar IAM platforms.</li>
</ul>
<p>In addition to the above skills and experience, the ideal candidate will be passionate about innovation, AI adoption, and continuous improvement aligned with Okta’s mission to build secure, intelligent, and connected business systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$150,000 - $250,000 per year</Salaryrange>
      <Skills>Enterprise Architecture, Cloud Computing, Identity and Access Management, Security, Data Analytics, Machine Learning, Artificial Intelligence, Software Development, DevOps, Agile Methodologies, AWS, Azure, GCP, Snowflake, Redshift, BigQuery, MuleSoft, Workato, Boomi, Informatica</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorisation services to organisations.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7442186</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aeba45bc-3e4</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>
<p>Visit mixpanel.com to learn more.</p>
<p>About the Customer Success &amp; Solutions Engineering Team</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role</p>
<p>Our SEs are inquisitive, nimble, and able to clearly articulate the technical benefits and requirements of Mixpanel to developers and product managers, while also communicating the business value of our product to high-level executives. In your first month, you’ll become a Mixpanel expert, both in features and functionality as well as implementation. You’ll have the opportunity to shadow customer calls and demos with current Sales Engineers and Account Executives while learning to articulate our value proposition. You’ll also be trained on Mixpanel’s internal systems and tools to set you up for success.</p>
<p>Within your first three months, you’ll be directly involved in deal cycles with Commercial Account Executives. You’ll lead the technical qualification for customer use cases and deliver customized demos for prospects. You’ll work directly with leadership at the prospect’s organization to understand business challenges that can be solved through an analytics platform and consult on how Mixpanel can address those challenges to achieve a strong ROI. You’ll also work with the prospect’s business and technical teams to scope and execute proof-of-concept projects to establish Mixpanel’s value, including consulting on data ingestion methods, overall architecture, success criteria, and rollout strategies for analytics tools across an organization.</p>
<p>Responsibilities</p>
<p>Serve as a trusted technical advisor for prospects, providing strategic consultation on data architecture, governance, instrumentation, and business outcomes.</p>
<p>Communicate and consult effectively at all levels of the customer’s organization to earn trust and influence buying decisions.</p>
<p>Bridge the technical-business gap, working with senior stakeholders to define success for proof-of-concepts and ensuring successful execution and outcomes.</p>
<p>Leverage your Mixpanel expertise and technical/consultative skills to impart best practices throughout proof-of-concept projects.</p>
<p>Partner with Account Executives to drive revenue growth, serving as the key technical contact for customers.</p>
<p>Partner with post-sales teams to ensure that pre-sales value propositions translate into tangible post-sales results.</p>
<p>Develop relationships and uncover the needs of key technical stakeholders within your assigned book of business.</p>
<p>Be the “Voice of the Prospect” by collecting feedback from potential Mixpanel customers and sharing it with the Product team.</p>
<p>We&#39;re Looking For Someone Who Has</p>
<p>The ability to communicate with stakeholders at all levels, from discussing APIs with developers to organizational efficiency with CIOs.</p>
<p>A demonstrated track record of qualifying and selling technical solutions to executive stakeholders.</p>
<p>6+ years of experience in a Software-as-a-Service Sales Engineering or related role.</p>
<p>Experience in data querying, modeling, and transformation using tools such as SQL, dbt, Python, Business Intelligence platforms, or Product Analytics tools.</p>
<p>Familiarity with databases and cloud data warehouses (e.g., Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks).</p>
<p>A successful record of experience in sales engineering, customer success, client-facing professional services, consulting, or technical project management.</p>
<p>Excellent written, analytical, communication, and presentation skills.</p>
<p>Strong process and project delivery discipline.</p>
<p>The ability to travel.</p>
<p>Fluency in multiple languages; German preferred.</p>
<p>Benefits and Perks</p>
<p>Comprehensive Medical, Vision, and Dental Care</p>
<p>Mental Wellness Benefit</p>
<p>Generous Vacation Policy &amp; Additional Company Holidays</p>
<p>Enhanced Parental Leave</p>
<p>Volunteer Time Off</p>
<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>
<p>Culture Values</p>
<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>
<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>
<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>
<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>
<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>
<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, dbt, Python, Business Intelligence platforms, Product Analytics tools, Databases, Cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a software company that provides a digital analytics platform. It has over 29,000 customers and has raised $277M from prominent investors.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7407407</Applyto>
      <Location>London, UK (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b97b97d-56b</externalid>
      <Title>Solutions Engineer (pre-sales)</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel&#39;s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>As a Solutions Engineer (pre-sales) at Mixpanel, you will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers&#39; organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer&#39;s Mixpanel investment.</p>
<p>Responsibilities</p>
<ul>
<li>Support Sales Engineers and Account Executives in deal cycles, contributing to technical discovery and solution design</li>
<li>Deliver standard and semi-customized product demos to prospects</li>
<li>Assist in qualifying customer use cases and identifying opportunities where Mixpanel can provide value</li>
<li>Contribute to proof-of-concept projects, including setup, execution, and documentation of results</li>
<li>Provide guidance on implementation best practices, including instrumentation and data structure</li>
<li>Collaborate with internal teams (Sales, Product, Engineering, Support) to ensure a smooth customer experience</li>
<li>Build relationships with customer stakeholders and respond to technical questions</li>
<li>Capture and share customer feedback with internal teams to inform product improvements</li>
</ul>
<p>We&#39;re Looking For Someone Who Has</p>
<ul>
<li>Ability to communicate with both technical and non-technical stakeholders</li>
<li>Some experience supporting technical sales cycles, customer implementations, or consulting engagements</li>
<li>3+ years of experience in Sales Engineering, Customer Success, Solutions Consulting, or a related role</li>
<li>Working knowledge of data concepts such as SQL, event tracking, or analytics tools</li>
<li>Familiarity with databases or cloud data warehouses (e.g., Snowflake, BigQuery, Redshift)</li>
<li>Strong problem-solving skills with the ability to work on moderately complex, well-defined problems</li>
<li>Solid communication and presentation skills</li>
<li>Ability to manage multiple workstreams with guidance</li>
<li>Interest in learning and applying new technologies, including AI tools</li>
<li>Willingness to travel as needed</li>
</ul>
<p>Compensation</p>
<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>
<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors.</p>
<p>Mixpanel Compensation Range $170,000-$230,000 USD</p>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers&#39; needs, ensuring their success is our north star.</li>
<li>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We&#39;re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital. Mixpanel&#39;s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics. Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you&#39;ll be helping the world&#39;s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity. At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have. We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply. We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status. Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We&#39;ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$170,000-$230,000 USD</Salaryrange>
      <Skills>SQL, event tracking, analytics tools, databases, cloud data warehouses, Snowflake, BigQuery, Redshift</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a digital analytics platform that helps companies understand user behavior and track company success metrics.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7800289</Applyto>
      <Location>San Francisco, US (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on experience with data structures and algorithms (DSA). You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e4e06a1c-882</externalid>
      <Title>Senior Data Engineer, Air Dominance &amp; Strike</Title>
      <Description><![CDATA[<p>We&#39;re looking for ambitious, strategic engineers to build and accelerate every step of the way. As a Senior Data Engineer, you will be responsible for setting project impact, prioritization, and timelines. You will build and grow a team of engineers, lead by example, as we grow and scale our business and field operations. You will partner with senior leaders to bring structure, insight, and velocity to Anduril&#39;s highest-priority operational challenges.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing operational apps in a scrappy but effective way, working closely with users to iterate and support outcomes</li>
<li>Building systems and infrastructure that allows applications to mature, roll out, and scale in the field</li>
<li>Working closely with Anduril&#39;s corporate Analytics team to expand our ontologies and generalize our data and workflow applications across the entire company</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years experience in technical operating roles</li>
<li>Proof of delivery - you owned a project end-to-end, from ambiguity to a delivered, scalable outcome</li>
<li>Leader - you&#39;re skilled at leading through influence and diving into the details yourself, and you have experience building high-performing teams</li>
<li>Strong communicator - demonstrated aptitude for strategic partnerships and communication with technical and non-technical stakeholders</li>
<li>Engineering fundamentals - Computer Science, Engineering, Physics or similar background</li>
<li>You&#39;re deeply intellectually interested in the intersection of analytics and the real, physical, atoms-based, hardware world and are motivated by Anduril&#39;s mission</li>
<li>You&#39;re energized by business impact &amp; a self-starter: you&#39;d rather build an imperfect solution quickly that is used by many people than a perfect solution that collects dust</li>
<li>Coding experience - you have used Python, SQL, React, TypeScript, or equivalent</li>
<li>Data system exposure: dbt, Redshift, Looker/Tableau, Palantir Foundry</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>Python, SQL, React, TypeScript, dbt, Redshift, Looker/Tableau, Palantir Foundry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that develops advanced technology to transform U.S. and allied military capabilities.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5111298007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>328a534b-bac</externalid>
      <Title>Customer Sales Director (Austin, TX)</Title>
      <Description><![CDATA[<p>We are looking for a Customer Sales Director to focus on an at-scale strategy to support, retain, and grow a mix of our Commercial and Enterprise customer base. This role is a hybrid-based role in Austin, Texas.</p>
<p>The ideal candidate will have 4+ years of experience in SaaS sales or account management, with a proven track record of exceeding targets. They will be able to build a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts, manage multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts</li>
<li>Managing multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas</li>
<li>Protecting renewals by monitoring account signals, deepening executive alignment, and helping customers realize consistent value</li>
</ul>
<p>The successful candidate will have strong consultative selling skills, engaging effectively with both technical and business audiences. They will be proactive and organized, capable of independently managing a diverse book of business.</p>
<p>Preferred qualifications include prior experience in analytics, ETL, BI, or open-source software, familiarity with dbt (core or Cloud) and the modern data stack, including platforms like Snowflake, BigQuery, Redshift, or Databricks, experience with consumption and/or usage-based pricing structures, and experience with the MEDD(P)ICC sales methodology / Command of the Message.</p>
<p>Benefits include unlimited vacation time, 401k plan with 3% guaranteed company contribution, comprehensive healthcare coverage, generous paid parental leave, flexible stipends for health &amp; wellness, home office setup, cell phone &amp; internet, learning &amp; development, and office space.</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>SaaS sales, account management, analytics, ETL, BI, open-source software, dbt (Core or Cloud), Snowflake, BigQuery, Redshift, Databricks, consumption and/or usage-based pricing structures, MEDD(P)ICC sales methodology / Command of the Message</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer in analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4616931005</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e40d534f-76a</externalid>
      <Title>Resident Architect</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>We&#39;re seeking an experienced Resident Architect (RA) with a passion for solving challenging problems with dbt to join our Professional Services team. RAs are billable to dbt Enterprise customers and help achieve our mission to empower data developers to create and disseminate organisational knowledge.</p>
<p>Responsibilities</p>
<ul>
<li>Work on a variety of impactful customer technical projects - inclusive of implementation, troubleshooting configurations, instilling best practices, and solutioning MVPs and long-term solutions to customer-specific requirements</li>
<li>Consult on architecture and design</li>
<li>Ensure our most strategic enterprise customers are adopting the product</li>
<li>Collaborate with other internal customer-facing teams at dbt Labs - Sales, Solution Architects, Training, Support</li>
<li>Provide critical feedback to dbt Labs product and engineering teams to improve and prioritise customer requests and ensure rapid resolution for engagement-specific issues</li>
<li>Become a product expert with dbt in the context of the modern data stack (if you aren&#39;t already)</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>4+ years&#39; experience working with technical data tooling, ideally in a customer-facing post-sales, technical architect, or consulting role</li>
<li>Deep expertise in at least one data platform (Snowflake, Databricks, BigQuery, Redshift)</li>
<li>Experience using, deploying, or configuring dbt in an enterprise setting, with a minimum of one year working with dbt</li>
<li>Proficiency in writing SQL and Python in analytics contexts</li>
<li>Eagerness to build skills in technical areas that support deployment and integration of dbt enterprise solutions to complete customer projects</li>
<li>Customer focus, embracing one of our core values: users are our best advocates</li>
<li>Strong organisational skills with the ability to manage multiple technical projects simultaneously - including defining scope, tracking timelines, and ensuring deliverables are met</li>
<li>Clear and concise communication, with the ability to engage internal and external stakeholders, effectively explain complex technical or organisational challenges, and propose thoughtful, iterative solutions</li>
<li>The ability to thrive in a remote organisation that highly values transparency and cross-collaboration</li>
<li>Willingness to travel approximately 2-4x/year for customer onsite sessions, team offsites, and company events</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>You have obtained the dbt Analytics Engineering Certification</li>
<li>You have the ability to advise on dbt enterprise recommendations and build direction/consensus with the customer to move forward</li>
<li>Experience with traditional enterprise ETL tooling (Informatica, DataStage, Talend)</li>
</ul>
<p>Remote Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner</li>
<li>Hiring Manager Interview</li>
<li>Technical Task + Presentation</li>
<li>Team Interview</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $114,000 - $137,700</li>
<li>The typical starting salary range for this role in the select locations listed is: $126,000 - $153,000</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,000 - $137,700</Salaryrange>
      <Skills>dbt, data platform, Snowflake, Databricks, BigQuery, Redshift, SQL, Python, analytics engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4627942005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b431957-4b8</externalid>
      <Title>Planning Analyst</Title>
      <Description><![CDATA[<p>The Planning organisation plays a critical role in driving operational outcomes by translating demand insights into precise production and procurement signals. Within this group, the Systems &amp; Process team partners closely with Anduril&#39;s Analytics team to build scalable workflows, ontologies, and data models that power planning excellence.</p>
<p>We&#39;re looking for a Planning Analyst who thrives at the intersection of the digital and physical hardware worlds. You&#39;ll design and implement data models, workflows, and analytical tools that directly improve operational performance, enabling Anduril to plan more effectively at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with Planning leadership and Anduril&#39;s Analytics team to build secure, timely, and extensible ontologies for domains such as Supply Chain, Manufacturing, and Finance.</li>
<li>Develop reporting and data applications that provide near real-time visibility into planning KPIs, including forecast accuracy, fill rates, and inventory turns.</li>
<li>Conduct ad-hoc analyses on demand signals, inventory strategies, and supply risk, delivering insights that balance service levels with profitability.</li>
<li>Run scenario planning and sensitivity analyses to evaluate the impact of market or contract volatility on operational outcomes.</li>
<li>Collaborate with Analytics engineers to generalise planning workflows and data products across Anduril&#39;s production and sustainment organisations.</li>
<li>Act as a subject matter expert for planning tools and systems, supporting upgrades, enhancements, and best practice adoption.</li>
<li>Become a trusted resource for leadership by helping them run their organisations more effectively through data-driven insights.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in an analytics-focused role (Analytics Engineer, Data Engineer, or Analyst/Consultant with strong supply chain or operations background).</li>
<li>Strong operational instincts and a track record of owning complex, ambiguous planning problems from start to finish.</li>
<li>Ability to think strategically and execute tactically.</li>
<li>Data-driven mindset and ability to turn analytics into action.</li>
<li>Systems fluency, with experience working across ERP platforms (e.g., NetSuite, SAP, Oracle).</li>
<li>Cross-functional empathy and ability to understand the needs of engineering, product, deployment, and finance.</li>
<li>Comfortable with ambiguity and ability to build structure in environments that are scaling fast.</li>
<li>High ownership and low ego; you value results over credit.</li>
<li>Ability to anticipate friction before it happens and proactively work to prevent issues rather than react to them.</li>
<li>Demonstrated experience building and owning processes and ontologies in a fast-paced environment.</li>
<li>Expert-level SQL skills and proficiency in Python (or other programming languages).</li>
<li>Experience with BI and analytics tools (Looker, Tableau, Power BI, Palantir Foundry, dbt, Redshift, etc.).</li>
<li>Strong ability to translate technical models into actionable insights for non-technical stakeholders.</li>
<li>Self-starter mindset and ability to prioritise velocity and impact.</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$98,000-$130,000 USD</Salaryrange>
      <Skills>SQL, Python, ERP platforms (e.g., NetSuite, SAP, Oracle), BI and analytics tools (Looker, Tableau, Power BI, Palantir Foundry, dbt, Redshift, etc.), Data modelling, Workflow design, Analytical tools, Supply chain management, Manufacturing, Finance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops and manufactures advanced sensors and artificial intelligence systems for defence and security applications.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4657698007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c53ecdd3-dc7</externalid>
      <Title>Scale Solution Engineer</Title>
      <Description><![CDATA[<p>As a Scale Solution Engineer at Databricks, you will play a critical role in advising customers during their onboarding process. You will work directly with customers to help them onboard and deploy Databricks in their production environment.</p>
<p>Your impact will be significant, ensuring new customers have an excellent experience by providing technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions. You will also work directly with multiple customers concurrently to provide technical solutions.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>An undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>1+ years experience in a technical role, preferably in the data or cloud field</li>
<li>Knowledge of at least one of the public cloud platforms AWS, Azure, or GCP</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Knowledge of end-to-end data analytics workflow</li>
<li>Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow), Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent time management and prioritization skills</li>
<li>Excellent written and verbal communication</li>
</ul>
<p>Bonus: Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public cloud platforms, AWS, Azure, GCP, Python, Scala, SQL, Data Engineering technologies, ETL, DBT, Spark, Airflow, Data Warehousing technologies, Stored Procedures, Redshift, Snowflake, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8408817002</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ce3bb01-4a1</externalid>
      <Title>Scale Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we aim to empower our customers to solve the world&#39;s most challenging data problems using the Data Intelligence platform. As a Scale Solution Engineer, you will be critical in advising customers during their onboarding. You will work directly with customers to help them onboard and deploy Databricks in their production environment and accelerate Databricks features adoption.</p>
<p>The impact you will have:</p>
<ul>
<li>Ensure new customers have an excellent experience by providing technical assistance early in their journey</li>
<li>Become an expert on the Databricks Platform and guide customers in making the best technical decisions</li>
<li>Work directly with multiple customers concurrently to provide technical solutions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>3+ years experience in a customer-facing technical role in pre-sales, professional services, consulting or customer success</li>
<li>Solid understanding of the end-to-end data analytics workflow</li>
<li>Excellent time management and prioritization skills</li>
<li>Knowledge of a programming language - Python, Scala, or SQL</li>
<li>Knowledge of public cloud platforms AWS, Azure or GCP would be a plus</li>
<li>Hands-on professional or academic experience in one or more of the following:
<ul>
<li>Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow)</li>
<li>Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
</ul>
</li>
<li>Excellent written and verbal communication, in English and Portuguese</li>
<li>Bonus - Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models)</li>
<li>Bonus - Databricks certification(s)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Data Engineering, Data Warehousing, Python, Scala, SQL, AWS, Azure, GCP, ETL, DBT, Spark, Airflow, Redshift, Snowflake, English, Portuguese, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8391865002</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>886118d3-6a1</externalid>
      <Title>Senior Data Engineer - Data Engineering</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The main goal of the DE team in 2024-25 is to build robust golden data sets to power our business goals of creating more insights-based products. Making data-driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data.</p>
<p>Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like DBT, Airflow, Redshift, ElasticSearch, Atlanta, and Retool to orchestrate data pipelines and define workflows.</p>
<p>We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid&#39;s data strategy and a data-first mindset.</p>
<p>Our engineering culture is IC-driven -- we favor bottom-up ideation and empowerment of our incredibly talented team.</p>
<p>We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.</p>
<p>You will be in a high-impact role that will directly enable business leaders to make faster and more informed business judgments based on the datasets you build.</p>
<p>You will have the opportunity to carve out the ownership and scope of internal datasets and visualizations across Plaid, a currently unowned area that we intend to take over and build SLAs on.</p>
<p>You will have the opportunity to learn best practices and up-level your technical skills from our strong DE team and from the broader Data Platform team.</p>
<p>You will collaborate with and have strong and cross-functional partnerships with literally all teams at Plaid from Engineering to Product to Marketing/Finance etc.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design, and data usage principles.</li>
<li>Keeping data quality and performance top of mind while designing datasets.</li>
<li>Leading key data engineering projects that drive collaboration across the company.</li>
<li>Advocating for adopting industry tools and practices at the right time.</li>
<li>Owning core SQL and Python data pipelines that power our data lake and data warehouse.</li>
<li>Delivering well-documented data with defined dataset quality, uptime, and usefulness.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale.</li>
<li>You have experience building data models and data pipelines on top of large datasets (on the order of 500TB to petabytes).</li>
<li>You value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow.</li>
<li>You have experience working with different performant warehouses and data lakes: Redshift, Snowflake, Databricks.</li>
<li>You have experience building and maintaining batch and real-time pipelines using technologies like Spark and Kafka.</li>
<li>You appreciate the importance of schema design, and can evolve an analytics schema on top of unstructured data.</li>
<li>You are excited to try out new technologies. You like to produce proof-of-concepts that balance technical advancement with user experience and adoption.</li>
<li>You like to get deep in the weeds to manage, deploy, and improve low-level data infrastructure.</li>
<li>You are empathetic working with stakeholders. You listen to them, ask the right questions, and collaboratively come up with the best solutions for their needs while balancing infrastructure and business needs.</li>
<li>You are a champion for data privacy and integrity, and always act in the best interest of consumers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p>We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p>We are always looking for team members that will bring something unique to Plaid!</p>
<p>Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.</p>
<p>Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>SQL, Python, DBT, Airflow, Redshift, ElasticSearch, Atlanta, Retool, Spark, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and services for developers to connect financial accounts to applications and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/022278b3-0943-44b3-a54b-1de421017589</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd05f3e3-531</externalid>
      <Title>Data/Analytics Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI TAGline Removed
We are seeking passionate and talented Data/Analytics Engineers to join our team.</p>
<p>In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure. You will work with large volumes of data, enabling product teams to access secure and reliable data quickly. Your contributions will support our science team in enhancing the quality of our state-of-the-art AI models and help business users make informed decisions.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>
<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions, e.g. work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>
<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>
<li>Define and enforce data governance, metadata management, and data lineage standards.</li>
<li>Ensure data integrity, security, and compliance with industry standards.</li>
</ul>
<p>About You</p>
<ul>
<li>Master&#39;s degree in Computer Science, Engineering, Statistics, or a related field.</li>
<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>
<li>Proficiency in Python and SQL.</li>
<li>Experience with dbt.</li>
<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).</li>
<li>Strong analytical and problem-solving skills, with attention to detail.</li>
<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>
<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>
<li>Background in building self-service data platforms for analytics and AI use cases.</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Intro call with Recruiter - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>
<li>Technical interview - System Design - 45 min</li>
<li>Value talk interview - 30 min</li>
<li>References</li>
</ul>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>The position is based in our Paris HQ offices, and we encourage working from the office as much as possible (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance, and increase productivity. Each manager can decide the number of days worked remotely based on autonomy and specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p>What We Offer</p>
<ul>
<li>💰 Competitive salary and equity package</li>
<li>🧑‍⚕️ Health insurance</li>
<li>🚴 Transportation allowance</li>
<li>🥎 Sport allowance</li>
<li>🥕 Meal vouchers</li>
<li>💰 Private pension plan</li>
<li>🍼 Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. Its comprehensive AI platform meets on-premises and cloud-based needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Metadata and catalog experience, Data observability tools, Infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. The tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or structured streaming)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>