<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>119c9488-4eb</externalid>
      <Title>Software Engineer, Infrastructure (8+ YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>We currently have openings on:</p>
<ul>
<li>Base Infrastructure: We are looking for strong engineers with leadership experience to join the Serving Infrastructure organisation. You will primarily work on the Base Infrastructure team, whose key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Data Infrastructure: The Data Infrastructure team’s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse.</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<p>Proactively identify and lead significant improvements to Airtable’s infrastructure, working across teams and product areas to maximise business and engineering impact. Work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. Build clean, reusable, and maintainable abstractions that will be used by Airtable’s engineers for years to come. Take full ownership of components of Airtable’s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p><strong>Who you are</strong></p>
<p>You have at least 8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based in, or willing to relocate to, the San Francisco Bay Area or New York City for this role.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000-$339,900 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Apache Spark, Kafka, Apache Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. More than 500,000 organisations, including 80% of the Fortune 100, rely on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>196000</Compensationmin>
      <Compensationmax>339900</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400388002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33521936-dee</externalid>
      <Title>Software Engineer, Infrastructure (2-8 YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>Airtable&#39;s infrastructure is evolving to meet the needs of our fast-growing engineering org. We currently have openings on:</p>
<ul>
<li>Base Infrastructure: The Base Infrastructure team owns the system that powers the core of Airtable&#39;s product--serving Airtable bases. We are investing in the foundations of our homegrown in-memory database. Key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
</ul>
<ul>
<li>Compute: The compute pod builds and manages our Kubernetes-based platform that supports every service at Airtable, including all new AI services such as vector databases, AI evals store, and document extraction and understanding services. We have a lot of exciting foundational work on our roadmap, such as overhauling our network stack and service discovery to simplify service setup and strengthen security, region-level disaster recovery, bringing up the compute platform from 0-&gt;1 in a new region, and building custom Kubernetes operators for reliably managing some of our most critical workloads.</li>
</ul>
<ul>
<li>Data Infrastructure: The Data Infrastructure team&#39;s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse. This infrastructure is used by Airtable&#39;s data engineers and analysts, as well as product developers building features powered by business data. The team is focused on scaling to petabyte volume, enabling sub-second streaming, tightening data governance, and delivering cost-efficient ML-ready datasets to power Airtable&#39;s native AI products with fresh, high-quality signals.</li>
</ul>
<ul>
<li>Developer Platform: The Developer Platform team sits at the intersection of all engineering at Airtable, focusing on building the internal tooling, frameworks, and CI/CD systems that power our product teams. We strive to streamline developer workflows, from build and test cycles to production deployments, and foster a best-in-class developer experience.</li>
</ul>
<ul>
<li>Storage: The Storage team&#39;s mission is to accelerate product development at Airtable by providing scalable, reliable, and easy-to-use storage abstractions. We use RDS MySQL, DynamoDB, Redis, and TiDB. We&#39;re looking for folks interested in distributed systems and databases who are excited to work on business-critical, petabyte-scale storage systems.</li>
</ul>
<ul>
<li>Traffic: We are looking for founding members of our Traffic Engineering team. We recently formed a Traffic Infrastructure team to ensure that traffic across Airtable&#39;s network and routing infrastructure is managed in a reliable, flexible, and secure manner. This will support improved performance in our secondary regions (EU and Australia) as well as other customer-driven projects.</li>
</ul>
<p>You will own all aspects of building, running, and improving these systems, from the underlying infrastructure all the way to the developer-facing code abstractions.</p>
<p>You will proactively identify and lead significant improvements to Airtable&#39;s infrastructure, working across teams and product areas to maximise business and engineering impact. You will work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. You will build clean, reusable, and maintainable abstractions that will be used by Airtable&#39;s engineers for years to come. You will take full ownership of components of Airtable&#39;s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p>You have 2-8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based in, or willing to relocate to, the San Francisco Bay Area.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$187,000-$260,000 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Kubernetes, Apache Spark, Kafka, Apache Flink, RDS MySQL, DynamoDB, Redis, TiDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. More than 500,000 organisations, including 80% of the Fortune 100, rely on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://www.airtable.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>187000</Compensationmin>
      <Compensationmax>260000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400373002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cdc0a4d-95f</externalid>
      <Title>Staff Software Engineer, Stream Compute</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Stream Compute team at Stripe. As a key member of this team, you will help define and deliver the next generation of Stripe&#39;s Flink-first stream compute infrastructure. This is a unique opportunity to work on some of the hardest problems in operating Flink in production, such as state management, exactly-once processing, performance isolation, and automated recovery.</p>
<p>Your primary responsibilities will include designing, building, and operating stream compute infrastructure with Apache Flink at the center, partnering with product and platform teams across Stripe to understand requirements, unblocking Flink adoption, and improving how stream processing infrastructure is used end-to-end. You will also define and implement operational best practices to improve resilience and reliability at scale, drive fleet-level automation and standardization, and lead initiatives that raise the bar on Flink availability and state durability.</p>
<p>To succeed in this role, you should have experience as a technical lead for team(s) working on distributed systems, including scaling them in fast-moving environments. You should also have hands-on experience with big data technologies such as Flink, Spark, Kafka, Pulsar, or Pinot, and experience developing, maintaining, and debugging distributed systems built with open source tools. Additionally, you should have strong software engineering skills and a passion for Big Data Distributed Systems, as well as the ability to write high-quality code in programming languages like Go, Java, Scala, etc.</p>
<p>If you&#39;re interested in joining our team and contributing to the development of our stream compute infrastructure, please don&#39;t hesitate to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Flink, Kafka, Temporal, AWS services, Distributed systems, Big data technologies, Software engineering, Go, Java, Scala, Streaming infrastructure, Real-time processing frameworks, Control planes, Open source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7767063</Applyto>
      <Location>San Francisco, Seattle, New York, Toronto</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>0841fcf4-9ab</externalid>
      <Title>Data Engineer SE - II</Title>
      <Description><![CDATA[<p>We are on a mission to rid the world of bad customer service by “mobilizing” the way help is delivered. Today’s consumers want an always-available customer service experience that leaves them feeling valued and respected.</p>
<p>Helpshift helps B2B brands deliver this modern customer service experience through a mobile-first approach. We have changed how conversations take place, moving the conversation away from a slow, outdated email and desktop experience to an in-app chat experience that allows users to interact with brands in their own time.</p>
<p>Through our market-leading AI-powered chatbots and automation, we help brands deliver instant and rapid resolutions. Because agents play a key role in delivering help, our platform gives agents superpowers with automation and AI that simply works.</p>
<p><strong>About the Team</strong></p>
<p>Consumers care first and foremost about having their time valued by brands. Brands need insights into their customer service operation to serve their consumers effectively. Such insights and analytics are delivered through various data products like in-app analytics dashboards and data-sharing integrations.</p>
<p>The data platform team is responsible for designing, building, and maintaining the data infrastructure that enables such data and analytics products at scale. We build and manage data pipelines, databases, and other data structures to ensure that the data is reliable, accurate, and easily accessible.</p>
<p>We also enable internal stakeholders with business intelligence, and machine learning teams with data ops. This team manages the platform that handles 2 million events per minute and processes more than 1 terabyte of data daily.</p>
<p><strong>About the Role</strong></p>
<ul>
<li>Building maintainable data pipelines, both for data ingestion and for operational analytics, on data collected from 2 billion devices and 900M monthly active users</li>
<li>Building customer-facing analytics products that deliver actionable insights and data, and make it easy to detect anomalies</li>
<li>Collaborating with data stakeholders to understand their data needs and being a part of the analysis process</li>
<li>Writing design specifications and test, deployment, and scaling plans for the data pipelines</li>
<li>Mentoring people in the team &amp; organization</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in building and running data pipelines that scale to TBs of data</li>
<li>Proficiency in a high-level object-oriented programming language (Python or Java) is a must</li>
<li>Experience with cloud data platforms like Snowflake and AWS (EMR/Athena) is a must</li>
<li>Experience in building modern data lakehouse architectures using Snowflake and columnar formats like Apache Iceberg/Hudi, Parquet, etc</li>
<li>Proficiency in Data modeling, SQL query profiling, and data warehousing skills is a must</li>
<li>Experience in distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc</li>
<li>Knowledge of workflow orchestrators like Airflow, Dagster, etc is a plus</li>
<li>Data visualization skills are a plus (PowerBI, Metabase, Tableau, Hex, Sigma, etc)</li>
<li>Excellent verbal and written communication skills</li>
<li>Bachelor’s Degree in Computer Science (or equivalent)</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid Time Offs</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Snowflake, AWS, EMR/Athena, Apache Iceberg/Hudi, Parquet, Apache Spark, Apache Flink, Dataflow/Apache Beam, Airflow, Data modeling, SQL query profiling, data warehousing, PowerBI, Metabase, Tableau, Hex, Sigma</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/helpshift.com.png</Employerlogo>
      <Employerdescription>Helpshift provides a mobile-first customer service experience for B2B brands. It has over 900 million monthly active consumers and is used by hundreds of leading brands.</Employerdescription>
      <Employerwebsite>https://www.helpshift.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D451DB2325</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country>India</Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>672557eb-bee</externalid>
      <Title>Engineering Manager, Data Platform</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, Data Platform</strong></p>
<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead, mentor, and grow a team of senior and principal engineers</li>
<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>
<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>
<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>
<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>
<li>Ensure high standards in system architecture, code quality, and operational excellence</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>
<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>
<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>
<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>
<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>
<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>
<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>
<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>
<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>
<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Epic Games offers a comprehensive benefits package, including:</p>
<ul>
<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>
<li>Long-term disability and life insurance</li>
<li>401k with competitive match</li>
<li>Unlimited PTO and sick time</li>
<li>Paid sabbatical after 7 years of employment</li>
<li>Robust mental well-being program through Modern Health</li>
<li>Company-wide paid breaks and events throughout the year</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Epic Games</Employername>
      <Employerlogo>https://logos.yubhub.co/epicgames.com.png</Employerlogo>
      <Employerdescription>Epic Games is a leading game development company that creates award-winning games and engine technology.</Employerdescription>
      <Employerwebsite>https://www.epicgames.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.epicgames.com/en-US/careers/jobs/5818031004</Applyto>
      <Location>Cary, NC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or structured streaming)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make a global impact while working with a team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>