<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>03224784-9c2</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Job Title: Senior Data Engineering Manager</p>
<p>Location: Dublin, Ireland</p>
<p>Department: R&amp;D</p>
<p>Job Description:</p>
<p>Intercom is seeking a Senior Data Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>Strategic Impact You&#39;ll Drive:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>
</ul>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>AI-assisted Local Analytics Development Environment for Airflow and DBT.</li>
<li>Data-rich AI apps containerized on Snowflake SPCS.</li>
<li>A new, modern data catalog solution.</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>
<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder focus: ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>
<li>Technical depth: not afraid to get your hands dirty and write code when needed.</li>
<li>A leader and mentor: naturally recognizes opportunities to step back and mentor others.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>
<li>Modern data stack fluency: familiarity with tools like Snowflake and DBT.</li>
<li>Future-focused: keeps a keen eye on industry trends and emerging technologies.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up.</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%.</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>
<li>If you’re cycling, we’ve got you covered with the Cycle-to-Work Scheme, plus secure bike storage.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>Policies:</p>
<ul>
<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate more easily, and create a great culture, while still providing flexibility to work from home.</li>
<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f2f48ad-46d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will:</p>
<ul>
<li>Partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions.</li>
<li>Lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows.</li>
<li>Conduct data analysis and build custom models to support strategic business decisions and performance measurement.</li>
<li>Streamline data collection and reporting processes to reduce manual effort and improve efficiency.</li>
<li>Create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs.</li>
<li>Work with partner teams to align data collection with long-term analytics and feature development goals.</li>
</ul>
<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and an eagerness to learn. You should exhibit great judgment and sharp business and product instincts that allow you to differentiate essential from nice-to-have and to make good choices about trade-offs. You should have excellent communication skills and be able to tailor explanations of technical concepts to a variety of audiences.</p>
<p>Nice to have: exposure to Apache Airflow or other DAG frameworks; experience with Tableau, Looker, or a similar visualization/business intelligence platform; experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe; and familiarity with Python.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and strong hands-on coding skills in data structures and algorithms. You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a5a8459-f04</externalid>
      <Title>Engineering Manager of Managers, Data Platform</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p><strong>Who we are</strong></p>
<p>Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities.</p>
<p><strong>About the team</strong></p>
<p>The Big Data Infrastructure organization is a globally distributed team of approximately 40 engineers spread across Dublin, Bangalore, Seattle, and San Francisco. This team is the backbone of the company’s data ecosystem, responsible for building, scaling, and maintaining the highly reliable platforms that power data storage, orchestration, and processing at scale.</p>
<p>As the Head of Big Data Infra, you will lead a global, ~40-person engineering organization responsible for the foundational data platforms that drive the business. Reporting directly to the Head of Compute, you will define the strategic vision and roadmap for the company&#39;s data lake, orchestration pipelines, and batch computing environments.</p>
<p>The team&#39;s technical portfolio spans four core domains:</p>
<ul>
<li>Datalake (Storage): Managing scalable cloud storage and metadata layers, leveraging Amazon S3, Apache Iceberg (metastore and integrations), SAL, and Hive Metastore (HMS).</li>
<li>Data Orchestration: Ensuring robust pipeline execution and scheduling using Apache Airflow.</li>
<li>Batch Compute Infra (Data Store): Maintaining foundational data infrastructure and legacy systems, including Hadoop.</li>
<li>Batch Compute Experience (Data Processing): Optimizing and delivering powerful data processing environments utilizing Apache Spark and Apache Celeborn.</li>
</ul>
<p><strong>What you’ll do</strong></p>
<p>You will move beyond day-to-day management to act as an industry leader, effectively advocating for your organization&#39;s mission and impact. You will be expected to see problems others don&#39;t and rally people to independently create solutions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Set Strategic Vision: Define the scope, vision, and goals for your organization with little or no guidance. You will anticipate industry trends to influence Stripe&#39;s long-range plans and set direction on a multi-year timeframe.</li>
<li>Lead at Scale: Manage the achievement of and accountability for broad swaths of programs. You will establish wide-ranging and scaled processes, anticipating and removing roadblocks across multiple teams.</li>
<li>Drive Operational Excellence: Instill a culture of rigorous thinking and meticulous craftsmanship. You will ensure your organization drives constant improvement in team processes and maintains high standards of operational rigor.</li>
<li>Indirect Influence: Use indirect influence to steer other teams toward making the right decisions for Stripe. You will effectively communicate your team&#39;s plan and how it links to Stripe&#39;s company vision to cross-functional stakeholders.</li>
<li>Obsess Over Talent: Proactively invest in the development of the organization and its people at all levels. You will recruit world-class talent and coach your direct reports, who are themselves managers, to elevate the skills of the leadership team.</li>
<li>Stewardship &amp; Culture: Act as an ambassador and advocate for Stripe, modeling ownership for all other Stripes. You will actively work to increase Stripe&#39;s inclusivity and diversity and use our operating principles to guide decision-making.</li>
</ul>
<p><strong>Who you are</strong></p>
<p>We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>
<p><strong>Minimum requirements</strong></p>
<ul>
<li>Bachelor’s degree or equivalent practical experience, with a minimum of 5 years of software development experience.</li>
<li>Minimum 5 years of experience in a technical leadership role overseeing strategic projects.</li>
<li>Minimum 3 years of Manager of Managers experience (managing other engineering managers).</li>
<li>Experience building diverse teams to tackle challenging technical problems.</li>
<li>Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Strategic Ambiguity: Proven ability to translate chaos into clarity and navigate complex, high-impact work where you must define your own scope.</li>
<li>Infrastructure at Scale: Successfully shipped and operated critical infrastructure with significant responsibility over funds or critical data.</li>
<li>Cross-Functional Influence: A track record of getting other teams on board with your vision to support execution in a way that benefits the broader company.</li>
<li>Curiosity: You enjoy learning and diving into the nuts-and-bolts of how things work (e.g., global money movement rails, currency conversion, or inter-company flows).</li>
<li>Humility and Adaptability: You are humble and self-aware, with a history of adapting your management approach across different environments and seeking feedback to grow as a leader.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strategic vision, Technical leadership, Project management, Team management, Communication, Problem-solving, Infrastructure at scale, Cross-functional influence, Curiosity, Humility and adaptability, Apache Iceberg, Apache Airflow, Apache Spark, Apache Celeborn, Amazon S3, Hive Metastore, SAL, Cloud storage, Metadata layers, Data orchestration, Batch computing infrastructure, Legacy systems, Hadoop, Global money movement rails, Currency conversion, Inter-company flows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7747391</Applyto>
      <Location>Seattle, San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>93c1356c-a95</externalid>
      <Title>Principal Software Engineer, Web Data - Tech Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for an exceptional Principal Software Engineer to serve as the de facto Technical Lead for our Web Data Acquisition (WDA) team. This is a highly visible, hands-on technical leadership role where you&#39;ll own the architectural direction for crawling systems, evolve and unify crawling platforms into a best-in-class stack, and elevate a high-performing engineering team.</p>
<p>As a Principal Software Engineer, you&#39;ll solve complex distributed systems challenges, build modular tooling that accelerates delivery, and set the standard for observability and operational excellence. You&#39;ll have a dedicated manager handling all HR and administrative responsibilities. A product manager connects business needs with technical work. Your focus is 100% technical leadership, mentorship, and hands-on execution.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Technical Leadership &amp; System Design: Proven experience building web crawling or large-scale data systems from scratch. Strong architectural skills designing scalable, fault-tolerant distributed systems. Track record leading complex technical initiatives and driving architecture direction for teams.</li>
<li>Data Engineering Expertise: Deep background in large-scale data engineering (terabytes daily). Hands-on experience with cloud data warehouses (BigQuery, Snowflake). Experience with Apache Kafka, Kubernetes (GKE/EKS), and orchestration tools (Airflow).</li>
<li>Web Crawling &amp; Data Extraction: Deep expertise in web crawling technologies and advanced scraping (Scrapy or similar). Experience extracting structured/unstructured web data and SERP extraction. Knowledge of proxy infrastructure management, anti-bot detection, and ethical crawling.</li>
<li>Leadership &amp; Team Development: Experience mentoring engineers at all levels and fostering collaborative culture. Strong ability to influence technical direction and establish best practices. Track record hiring, coaching, and developing senior engineers.</li>
</ul>
<p>Ideal Candidate Profile:</p>
<ul>
<li>10+ years software engineering experience. 5+ years focused on data engineering. 3+ years in senior/principal-level technical leadership.</li>
<li>Strong CS fundamentals (algorithms, data structures, distributed systems). Self-starter who thrives in fast-paced environments.</li>
</ul>
<p>Core Technical Stack:</p>
<ul>
<li>Python &amp; Java</li>
<li>Apache Kafka</li>
<li>GCP (BigQuery, GKE, Vertex AI)</li>
<li>Snowflake &amp; Starburst/Trino</li>
<li>Terraform</li>
<li>Scrapy / Web Scraping Frameworks</li>
<li>Proxy Management Systems</li>
<li>Distributed Systems &amp; Kubernetes</li>
<li>Apache Airflow</li>
<li>Large-Scale ETL Pipelines</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Python, Java, Apache Kafka, Kubernetes, GCP, Snowflake, Terraform, Scrapy, Proxy Management Systems, Distributed Systems, Apache Airflow, Large-Scale ETL Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>163800</Compensationmin>
      <Compensationmax>257400</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8378092002</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba5e5f71-701</externalid>
      <Title>FBS Associate Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS Associate Analytics Engineer</p>
<p>We are seeking an FBS Associate Analytics Engineer to join our team. As an FBS Associate Analytics Engineer, you will play a key role in transforming raw data into structured, high-quality datasets that are ready for analysis. You will work on low to moderately complex business problems, receiving coaching and guidance from data leadership. Your primary focus will be on end-to-end data workflow, including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization.</p>
<p>Responsibilities</p>
<ul>
<li>Pipeline Design and Development: Architects and builds scalable data pipelines, with coaching and guidance from data leadership, using modern ELT (Extract, Load, Transform) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar.</li>
<li>Automates data ingestion processes from various sources including databases, APIs, and third party services.</li>
<li>Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery.</li>
<li>Optimizes storage solutions for performance, cost efficiency, and scalability.</li>
<li>Data Modeling - Develops and maintains logical and physical data models to support business analytics.</li>
<li>Creates and manages dimensional models, star/snowflake schemas, and other data structures.</li>
<li>Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages.</li>
<li>Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data.</li>
<li>Technology Stack - Utilizes modern data tools and technologies such as SQL, Python, dbt, Airflow, and cloud platforms like AWS, Azure, or GCP.</li>
<li>Continuous Learning – Stays updated with the latest trends, best practices, and advancements in data engineering and analytics.</li>
<li>Participates in professional development opportunities to enhance technical and analytical skills.</li>
<li>Provides code to technology teams for hardening and operationalization, with significant coaching, guidance, and feedback.</li>
<li>Performs other duties as assigned.</li>
</ul>
<p>Requirements</p>
<ul>
<li>1+ year of experience working in a data environment.</li>
<li>A good analytics mindset.</li>
<li>Knowledge of SQL.</li>
<li>Strong verbal communication and listening skills.</li>
<li>Demonstrated written communication skills.</li>
<li>Demonstrated analytical skills.</li>
<li>Demonstrated problem solving skills.</li>
<li>Effective interpersonal skills.</li>
<li>Seeks to acquire knowledge in area of specialty.</li>
<li>Possesses strong technical aptitude. Basic experience with SQL or similar, dimensional modeling, pipeline orchestration, building data pipelines to transform data, and BI visualizations.</li>
<li>Python experience is a plus</li>
</ul>
<p>Benefits</p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, DBT, Apache Airflow, Snowflake, Redshift, BigQuery, Data Modeling, Data Transformation, Data Quality Assurance, Cloud Platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/jaxxjRWH9XxkRbr1TCrPb5/remote-fbs-associate-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>