<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>12a4cdb3-95b</externalid>
      <Title>Senior Marketing Operations Manager, B2B Sales</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Marketing Operations Manager to architect and optimize our B2B sales-led and channel-driven GTM engine. This role will define and maintain the systems, processes, and operational rigor that align Marketing, SDR, Sales, and Partner teams.</p>
<p>The ideal candidate will have hands-on experience administering Marketo, Salesforce, and LeanData, and deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and evolve the GTM systems architecture, ensuring Salesforce, Marketo, LeanData, ZoomInfo, Qualified, Outreach, and Clay.io work together as a best-in-class, integrated ecosystem.</li>
<li>Lead the design, governance, and optimization of data orchestration workflows using LeanData, including routing, prioritization, handoffs, and conversion logic across Marketing, SDR, and Sales teams.</li>
<li>Design and execute a future-state operational roadmap focused on scaling B2B demand generation, ABM, and partner-led growth through automation, improved data flows, and AI-powered insights.</li>
<li>Build automated lifecycle processes for lead scoring, enrichment, qualification, and cross-functional handoffs using LeanData, Zapier, Clay, Segment, and AI agents.</li>
<li>Enhance sales productivity by implementing agentic workflows (e.g., automated follow-ups, enrichment workflows, SDR assistance tools) in Outreach and Salesforce.</li>
<li>Manage data governance across Salesforce, Marketo, and Segment, ensuring reliable attribution, reporting, and pipeline visibility.</li>
<li>Create AI-informed dashboards and reporting on pipeline performance, lead velocity, conversion, campaign effectiveness, and partner impact.</li>
<li>Partner with RevOps, Sales Systems, and Engineering to operationalize cross-functional processes that reduce manual work and improve efficiency.</li>
<li>Support partner/VAR motions through automated attribution, routing rules, partner engagement workflows, and integrated co-marketing processes.</li>
<li>Continuously evaluate new tools, AI capabilities, and operational improvements that elevate our GTM infrastructure.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years in Marketing Operations or Revenue Operations supporting B2B sales-led funnels.</li>
<li>Hands-on experience administering Marketo, Salesforce, and LeanData.</li>
<li>Deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</li>
<li>Proven ability to design automated workflows, operational processes, and scalable cross-system integrations.</li>
<li>Experience using AI-driven tools or agentic workflows to automate SDR tasks, enrich lead data, or accelerate GTM execution.</li>
<li>Strong analytical, system design, and documentation skills; able to translate business needs into scalable technical workflows.</li>
<li>Experience collaborating with Sales, SDR, RevOps, and System/Engineering teams.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience in FinTech or enterprise B2B SaaS environments.</li>
<li>Familiarity with conversational marketing/ABM platforms like Qualified.</li>
<li>Experience with tools like LeanData and Outreach in support of lead routing and SDR/BDR workflows.</li>
<li>Experience with paid funnel operations is a plus (Google Ads, LinkedIn Ads, etc.).</li>
<li>Understanding of partner/VAR operational workflows and partner attribution logic.</li>
<li>Ability to design scalable integrations using tools like Segment, Zapier, or Workato-style platforms.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $134,696 - $168,370.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$134,696 - $168,370</Salaryrange>
      <Skills>Marketo, Salesforce, LeanData, Lead routing, Lead-to-account matching, Data orchestration workflows, AI-driven tools, Agentic workflows, Automation, Improved data flows, AI-powered insights, Cross-system integrations, Strong analytical skills, System design, Documentation skills, FinTech, Enterprise B2B SaaS, Conversational marketing/ABM platforms, Paid funnel operations, Partner/VAR operational workflows, Scalable integrations</Skills>
      <Category>Marketing</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex provides intelligent finance platforms for companies to manage their expenses and move faster in multiple markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>134696</Compensationmin>
      <Compensationmax>168370</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8380680002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c49c483-668</externalid>
      <Title>Senior Marketing Operations Manager, B2B Sales</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. We&#39;re looking for a Senior Marketing Operations Manager to architect and optimize our B2B sales-led and channel-driven GTM engine.</p>
<p>As a Senior Marketing Operations Manager, you will define and maintain the systems, processes, and operational rigor that align Marketing, SDR, Sales, and Partner teams. You will champion operational excellence by improving lead management, automating revenue processes, increasing funnel velocity, and enabling more efficient cross-functional alignment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and evolve the GTM systems architecture, ensuring Salesforce, Marketo, LeanData, ZoomInfo, Qualified, Outreach, and Clay.io work together as a best-in-class, integrated ecosystem.</li>
<li>Lead the design, governance, and optimization of data orchestration workflows using LeanData, including routing, prioritization, handoffs, and conversion logic across Marketing, SDR, and Sales teams.</li>
<li>Design and execute a future-state operational roadmap focused on scaling B2B demand generation, ABM, and partner-led growth through automation, improved data flows, and AI-powered insights.</li>
<li>Build automated lifecycle processes for lead scoring, enrichment, qualification, and cross-functional handoffs using LeanData, Zapier, Clay, Segment, and AI agents.</li>
<li>Enhance sales productivity by implementing agentic workflows (e.g., automated follow-ups, enrichment workflows, SDR assistance tools) in Outreach and Salesforce.</li>
<li>Manage data governance across Salesforce, Marketo, and Segment, ensuring reliable attribution, reporting, and pipeline visibility.</li>
<li>Create AI-informed dashboards and reporting on pipeline performance, lead velocity, conversion, campaign effectiveness, and partner impact.</li>
<li>Partner with RevOps, Sales Systems, and Engineering to operationalize cross-functional processes that reduce manual work and improve efficiency.</li>
<li>Support partner/VAR motions through automated attribution, routing rules, partner engagement workflows, and integrated co-marketing processes.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years in Marketing Operations or Revenue Operations supporting B2B sales-led funnels.</li>
<li>Hands-on experience administering Marketo, Salesforce, and LeanData.</li>
<li>Deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</li>
<li>Proven ability to design automated workflows, operational processes, and scalable cross-system integrations.</li>
<li>Experience using AI-driven tools or agentic workflows to automate SDR tasks, enrich lead data, or accelerate GTM execution.</li>
<li>Strong analytical, system design, and documentation skills; able to translate business needs into scalable technical workflows.</li>
<li>Experience collaborating with Sales, SDR, RevOps, and System/Engineering teams.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience in FinTech or enterprise B2B SaaS environments.</li>
<li>Familiarity with conversational marketing/ABM platforms like Qualified.</li>
<li>Experience with tools like LeanData and Outreach in support of lead routing and SDR/BDR workflows.</li>
<li>Experience with paid funnel operations is a plus (Google Ads, LinkedIn Ads, etc.).</li>
<li>Understanding of partner/VAR operational workflows and partner attribution logic.</li>
<li>Ability to design scalable integrations using tools like Segment, Zapier, or Workato-style platforms.</li>
</ul>
<p>Compensation: The expected salary range for this role is $134,696 - $168,370. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$134,696 - $168,370</Salaryrange>
      <Skills>Marketing Operations, Revenue Operations, Marketo, Salesforce, LeanData, Lead Routing, Lead-to-Account Matching, Data Orchestration, Workflow Automation, AI-Driven Tools, Agentic Workflows, Analytical Skills, System Design, Documentation, Collaboration, Sales, SDR, RevOps, System/Engineering, FinTech, Enterprise B2B SaaS, Conversational Marketing/ABM, Qualified, Outreach, Paid Funnel Operations, Partner/VAR Operational Workflows, Partner Attribution Logic, Scalable Integrations, Segment, Zapier, Workato-style Platforms</Skills>
      <Category>Marketing</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that offers a platform for companies to manage their finances. It provides corporate cards, banking, and spend management tools.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>134696</Compensationmin>
      <Compensationmax>168370</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8372597002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0c6077e7-8e1</externalid>
      <Title>Staff Applied AI Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Staff Applied AI Engineer to join our team at Komodo Health. As a Staff Applied AI Engineer, you will be a cross-functional AI leader and strategic thought partner. This role exists to define Komodo&#39;s long-term AI capabilities, set company-wide technical standards, architect foundational AI systems, and guide teams toward scalable, safe, and innovative AI development.</p>
<p>You will influence data strategy, drive build-vs-buy evaluations, and meaningfully shift Komodo&#39;s AI-native infrastructure and culture. Your responsibilities will include:</p>
<ul>
<li>Helping design company-wide AI vision, standards, and reference architectures.</li>
<li>Defining and building foundational AI platforms (e.g., internal agent frameworks, orchestration systems).</li>
<li>Acting as a multiplier by mentoring teams, running workshops, and driving organizational knowledge sharing.</li>
<li>Making high-level technical decisions, including evaluating major build-vs-buy choices for platforms and tooling.</li>
<li>Shaping Komodo&#39;s data strategy from an AI perspective: requirements, quality, orientation, and long-term structure.</li>
<li>Leading complex applied research initiatives that push Komodo into new AI capability frontiers.</li>
<li>Ensuring Komodo&#39;s AI systems meet high bars for reliability, accountability, ethics, and transparency.</li>
</ul>
<p>The ideal candidate will be a recognized expert in applied AI with demonstrated impact across multiple teams or organizational domains. They will have extensive experience architecting end-to-end AI systems, multi-agent architectures, and large-scale orchestration frameworks. They will also have strong fluency in Python, GenAI frameworks (vLLM, Strands, Crew AI), and full-stack system integration.</p>
<p>We offer a competitive salary range of $274,000-$322,000 USD per year, depending on location. This role may be eligible for performance-based bonuses and equity awards. We also offer comprehensive health, dental, and vision insurance, flexible time off and holidays, 401(k) with company match, disability insurance and life insurance, and leaves of absence in accordance with applicable state and local laws and regulations and company policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$274,000-$322,000 USD</Salaryrange>
      <Skills>applied AI, data strategy, build-vs-buy evaluations, AI-native infrastructure, data orchestration, Python, GenAI frameworks, full-stack system integration, deep healthcare data, healthcare system expertise, large-scale distributed data and compute systems</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>274000</Compensationmin>
      <Compensationmax>322000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8512187002</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools that make data more accessible and useful. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, JavaScript/TypeScript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already deployed for Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a5a8459-f04</externalid>
      <Title>Engineering Manager of Managers, Data Platform</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p><strong>Who we are</strong></p>
<p>Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities.</p>
<p><strong>About the team</strong></p>
<p>The Big Data Infrastructure organization is a globally distributed team of approximately 40 engineers spread across Dublin, Bangalore, Seattle, and San Francisco. This team is the backbone of the company’s data ecosystem, responsible for building, scaling, and maintaining the highly reliable platforms that power data storage, orchestration, and processing at scale.</p>
<p>As the Head of Big Data Infra, you will lead a global, ~40-person engineering organization responsible for the foundational data platforms that drive the business. Reporting directly to the Head of Compute, you will define the strategic vision and roadmap for the company&#39;s data lake, orchestration pipelines, and batch computing environments.</p>
<p>The team&#39;s technical portfolio spans four core domains:</p>
<ul>
<li>Datalake (Storage): Managing scalable cloud storage and metadata layers, leveraging Amazon S3, Apache Iceberg (metastore and integrations), SAL, and Hive Metastore (HMS).</li>
<li>Data Orchestration: Ensuring robust pipeline execution and scheduling using Apache Airflow.</li>
<li>Batch Compute Infra (Data Store): Maintaining foundational data infrastructure and legacy systems, including Hadoop.</li>
<li>Batch Compute Experience (Data Processing): Optimizing and delivering powerful data processing environments utilizing Apache Spark and Apache Celeborn.</li>
</ul>
<p><strong>What you’ll do</strong></p>
<p>You will move beyond day-to-day management to act as an industry leader, effectively advocating for your organization&#39;s mission and impact. You will be expected to see problems others don&#39;t and rally people to independently create solutions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Set Strategic Vision: Define the scope, vision, and goals for your organization with little or no guidance. You will anticipate industry trends to influence Stripe&#39;s long-range plans and set direction on a multi-year timeframe.</li>
<li>Lead at Scale: Manage the achievement of and accountability for broad swaths of programs. You will establish wide-ranging and scaled processes, anticipating and removing roadblocks across multiple teams.</li>
<li>Drive Operational Excellence: Instill a culture of rigorous thinking and meticulous craftsmanship. You will ensure your organization drives constant improvement in team processes and maintains high standards of operational rigor.</li>
<li>Indirect Influence: Use indirect influence to steer other teams toward making the right decisions for Stripe. You will effectively communicate your team&#39;s plan and how it links to Stripe&#39;s company vision to cross-functional stakeholders.</li>
<li>Obsess Over Talent: Proactively invest in the development of the organization and its people at all levels. You will recruit world-class talent and coach your direct reports, who are themselves managers, to elevate the skills of the leadership team.</li>
<li>Stewardship &amp; Culture: Act as an ambassador and advocate for Stripe, modeling ownership for all other Stripes. You will actively work to increase Stripe&#39;s inclusivity and diversity and use our operating principles to guide decision-making.</li>
</ul>
<p><strong>Who you are</strong></p>
<p>We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>
<p><strong>Minimum requirements</strong></p>
<ul>
<li>Bachelor’s degree or equivalent practical experience, with a minimum of 5 years of software development experience.</li>
<li>Minimum 5 years of experience in a technical leadership role overseeing strategic projects.</li>
<li>Minimum 3 years of Manager of Managers experience (managing other engineering managers).</li>
<li>Experience building diverse teams to tackle challenging technical problems.</li>
<li>Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Strategic Ambiguity: Proven ability to translate chaos into clarity and navigate complex, high-impact work where you must define your own scope.</li>
<li>Infrastructure at Scale: Successfully shipped and operated critical infrastructure with significant responsibility over funds or critical data.</li>
<li>Cross-Functional Influence: A track record of getting other teams on board with your vision to support execution in a way that benefits the broader company.</li>
<li>Curiosity: You enjoy learning and diving into the nuts-and-bolts of how things work (e.g., global money movement rails, currency conversion, or inter-company flows).</li>
<li>Humility and Adaptability: You are humble and self-aware, with a history of adapting your management approach across different environments and seeking feedback to grow as a leader.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strategic vision, Technical leadership, Project management, Team management, Communication, Problem-solving, Infrastructure at scale, Cross-functional influence, Curiosity, Humility and adaptability, Apache Iceberg, Apache Airflow, Apache Spark, Apache Celeborn, Amazon S3, Hive Metastore, SAL, Cloud storage, Metadata layers, Data orchestration, Batch computing infrastructure, Legacy systems, Hadoop, Global money movement rails, Currency conversion, Inter-company flows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7747391</Applyto>
      <Location>Seattle, San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a4be76f-140</externalid>
      <Title>FBS Marketing Automation &amp; Integration Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>The team is responsible for architecting and maintaining scalable MarTech solutions, with a focus on data integration, customer journey orchestration, and marketing automation. This team operates within the Data, Tech, and Operations tower of the Direct BU.</p>
<p>The Marketing Automation &amp; Integration Engineer role centers on the implementation and optimization of a MarTech data flow pattern involving Snowflake, Segment, Braze, and other SaaS platforms. Key responsibilities include:</p>
<ul>
<li>Design and maintain data pipelines between Snowflake, Segment CDP, Braze, and additional platforms</li>
<li>Implement real-time and batch data ingestion strategies</li>
<li>Manage customer event tracking and identity resolution within Segment</li>
<li>Orchestrate personalized marketing campaigns in Braze using dynamic segmentation and behavioral triggers</li>
<li>Ensure data integrity and feedback loops from Braze back into Snowflake via Segment</li>
<li>Automate data transformations and enrichment using scripting languages</li>
<li>Monitor system performance and troubleshoot integration issues across platforms</li>
</ul>
<p>This position comes with a competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Segment CDP, Braze, Snowflake, Scripting Languages (Python / JS), Reverse ETL, Data Orchestration Platforms, Customer Data Schema Design, Data modeling and ETL/ELT Pipeline, API Integrations / Webhooks, Customer journey mapping and automation logic, Familiarity with insurance industry data and customer lifecycle models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services company that provides IT consulting, systems integration, and business process outsourcing services.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qJr4ny8yGpdyCcPXUusbL6/remote-fbs-marketing-automation-%26-integration-engineer-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>12b3e7a7-24b</externalid>
      <Title>Backend Engineer (Data)</Title>
      <Description><![CDATA[<p><strong>Description</strong></p>
<p>Fuse Energy is a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system. We raised $170M from top-tier investors including Multicoin, Balderton, Lakestar, Accel, Creandum, Lowercarbon, Ribbit, Box Group and strategic angels like Nico Rosberg, the Co-Founder of Solana, and GPs behind Meta, Revolut, Spotify, Uber and more.</p>
<p>We’re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>
<p>But we’re not stopping there. We’re also building the Energy Network: a decentralised platform of smart devices that rewards users in Energy Dollars for electrifying their homes, shifting usage to off-peak hours, and helping balance the grid. This network strengthens grid stability - a critical foundation for scaling AI data centers and other energy-intensive industries.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs</li>
<li>Own the design and evolution of analytical schemas, translating business logic into structured, intuitive data models</li>
<li>Migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability</li>
<li>Develop and maintain DBT models that reflect our business domain and make data easily accessible for teams</li>
<li>Implement tests and data quality checks to ensure reliable and trustworthy datasets</li>
<li>Identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>
<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>
<li>Hands-on experience working with relational databases, particularly Postgres</li>
<li>Experience designing schemas and building data models that reflect real-world business logic</li>
<li>Familiarity with DBT or similar data transformation frameworks</li>
<li>Strong understanding of data validation, testing, and quality assurance practices</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>
<li>Experience with CI/CD practices for data pipelines and transformations</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Postgres, DBT, Clickhouse, cloud-based data infrastructure, data orchestration tools, CI/CD practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup on a mission to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/f1WFaX5eREjwSWJ8Eo9yzt/hybrid-backend-engineer-(data)-in-london-at-fuse-energy</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>05ea3590-83b</externalid>
      <Title>Backend Engineer (Data)</Title>
      <Description><![CDATA[<p>You will join a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system.</p>
<p>We&#39;re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>
<p><strong>Responsibilities</strong></p>
<p>You will build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs. This includes owning the design and evolution of analytical schemas, translating business logic into structured, intuitive data models. You will also migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability.</p>
<p>You will develop and maintain DBT models that reflect our business domain and make data easily accessible for teams. Additionally, you will implement tests and data quality checks to ensure reliable and trustworthy datasets. You will identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>
<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>
<li>Hands-on experience working with relational databases, particularly Postgres</li>
<li>Experience designing schemas and building data models that reflect real-world business logic</li>
<li>Familiarity with DBT or similar data transformation frameworks</li>
<li>Strong understanding of data validation, testing, and quality assurance practices</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>
<li>Experience with CI/CD practices for data pipelines and transformations</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Postgres, DBT, Clickhouse, Cloud-based data infrastructure, Data orchestration tools, CI/CD practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup aiming to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/5m73SDXSAwUg5q1c5NGgDA/hybrid-backend-engineer-(data)-in-dubai-at-fuse-energy</Applyto>
      <Location>Dubai</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>015e5c6d-a31</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Why Valvoline Global Operations?</strong></p>
<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>
<p><strong>Job Purpose</strong></p>
<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>
<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>
<ul>
<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>
<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>
<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms.</li>
<li>Ensure high standards of data quality and implement data governance practices.</li>
<li>Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>
<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>
<li>Create documentation for the platform infrastructure and processes, and train other team members or users to use the platform effectively.</li>
</ul>
<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>
<ul>
<li>Bachelor&#39;s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>
<li>5-7+ years of experience in a data engineering or related role.</li>
<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>
<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>
<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>
<li>Familiarity with data orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.</li>
</ul>
<p><strong>Competencies Desired</strong></p>
<ul>
<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>
<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>
<li>SAP BW or SAP HANA experience is a plus.</li>
<li>Experience with BI tools like Power BI or Tableau.</li>
<li>Understanding of data governance frameworks and data security best practices.</li>
<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>
<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>
</ul>
<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>
<ul>
<li>Normal office environment.</li>
<li>Prolonged periods of computer use and frequent participation in meetings.</li>
<li>Occasional walking, standing, and light lifting (up to 10 lbs).</li>
<li>Minimal travel required.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, data security best practices, data lakehouse architecture, real-time streaming data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Valvoline Global Operations</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.valvolineglobal.com.png</Employerlogo>
      <Employerdescription>Valvoline Global Operations is a global company that develops future-ready products and provides best-in-class services for the automotive and industrial solutions industry.</Employerdescription>
      <Employerwebsite>https://jobs.valvolineglobal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>c45525ed-5cd</externalid>
      <Title>Machine Learning Engineer, Distributed Data Systems</Title>
      <Description><![CDATA[<p><strong>Machine Learning Engineer, Distributed Data Systems</strong></p>
<p><strong>Location:</strong> San Francisco • <strong>Employment Type:</strong> Full time • <strong>Location Type:</strong> Hybrid • <strong>Department:</strong> Research</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer, Distributed Data Systems, you will design and scale the infrastructure that powers large-scale multimodal training and evaluation at OpenAI. You’ll manage distributed data pipelines, collaborate closely with researchers to translate requirements into robust systems, and harden pipelines that serve as the backbone for Sora’s rapid iteration cycles.</p>
<p>We’re looking for engineers who are detail-oriented, have strong experience with distributed systems, and excel at building reliable infrastructure in high-stakes environments.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security.</li>
<li>Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient.</li>
<li>Partner with researchers to deeply understand requirements and translate them into production-ready systems.</li>
<li>Harden, optimize, and maintain critical data infrastructure systems that power multimodal training and evaluation.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong experience with distributed systems and large-scale infrastructure, along with a keen interest in data.</li>
<li>Are detail-oriented and bring rigor to building and maintaining reliable systems.</li>
<li>Demonstrate excellent software engineering fundamentals and organizational skills.</li>
<li>Are comfortable with ambiguity and rapid change.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$295K – $445K • Offers Equity</Salaryrange>
      <Skills>Distributed systems, Large-scale infrastructure, Data, Machine learning, Software engineering fundamentals, Organizational skills, Cloud computing, Containerization, DevOps, Data orchestration, Distributed storage, Streaming infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/4a13c764-18c3-4076-ac87-29e05491be07</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>