<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8f6ef3b1-c9b</externalid>
      <Title>Technical Program Manager, Compute</Title>
      <Description><![CDATA[<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>
<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>
<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>
<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>
<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>
<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>
<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>
<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>
<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>
<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>
<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>
<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>
<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>
<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>
<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>
<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>
<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>
<li>Experience building or improving observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>
<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>
<li>Experience scaling through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Cloud Infrastructure, Cluster Management, Job Scheduling, Resource Orchestration, Compute Capacity Management, GPU or Accelerator Infrastructure, Observability for Infrastructure Systems, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>290000</Compensationmin>
      <Compensationmax>365000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5138044008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c7dc0cb-87c</externalid>
      <Title>Solutions Architect - Storage</Title>
      <Description><![CDATA[<p>As a Solutions Architect at CoreWeave, you will play a vital role in helping customers succeed with our cloud infrastructure offerings. You will serve as the primary technical point of contact for customers, building strong technical relationships and ensuring their success, with a focus on storage technologies within high-performance compute (HPC) environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate closely with customers to understand their unique business needs and create, prototype, and deploy tailored solutions that align with their requirements</li>
<li>Lead proof of concept initiatives to showcase the value and viability of CoreWeave&#39;s solutions within specific environments</li>
<li>Drive technical leadership and direction during customer meetings, presentations, and workshops, addressing any technical queries or concerns that arise</li>
<li>Act as a virtual member of CoreWeave&#39;s Storage product and engineering teams, identifying opportunities for product enhancement and collaborating with engineers to implement your suggestions</li>
<li>Offer valuable insights on product features, functionality, and performance, contributing regularly to discussions about product strategy and architecture</li>
<li>Conduct periodic technical reviews and assessments of customer workloads, pinpointing opportunities for workload optimization and suggesting suitable solutions</li>
<li>Stay informed of the latest developments and trends in Kubernetes, cloud computing, and infrastructure, sharing your thought leadership with customers and internal stakeholders</li>
<li>Lead the prototyping and initiation of research and development efforts for emerging products and solutions, delivering prototypes and key insights for internal consumption</li>
<li>Represent CoreWeave at conferences and industry events, with occasional travel as required</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $220,000</Salaryrange>
      <Skills>cloud computing concepts, architecture, technologies, storage solutions, Kubernetes, cloud infrastructure, high-performance compute (HPC), storage technologies, file system protocols, infrastructure systems, code contributions to open-source inference frameworks, scripting and automation related to storage technologies, building solutions across multi-cloud environments, client or customer-facing publications/talks on latency, optimization, or advanced model-server architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure company that provides a platform for AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>220000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4568531006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd44a200-1ac</externalid>
      <Title>Director of Engineering (Service Foundations)</Title>
      <Description><![CDATA[<p>Job Title: Director of Engineering (Service Foundations)</p>
<p>We are seeking a seasoned Director of Engineering to lead our Service Foundations team. As a key member of our executive engineering team, you will be responsible for building and operating distributed systems, driving company-wide efficiency, reliability, and automation.</p>
<p>In this role, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical infrastructure initiatives that integrate AI-driven tooling directly into the infrastructure itself to make it more adaptive, scalable, and intelligent.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Solve real business needs at a large scale by applying your software engineering expertise</li>
<li>Ensure consistent delivery against milestones and strong alignment with the field, working &#39;two-in-a-box&#39; with product leadership</li>
<li>Evolve organisational structure to align with long-term initiatives, building strong &#39;5 ingredient&#39; teams with a good communications architecture</li>
<li>Manage technical debt, including long-term technical architecture decisions, and balance the product roadmap</li>
<li>Lead and participate in technical, product, and design discussions</li>
<li>Build, manage, and operate highly scalable services in the cloud</li>
<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Create a culture of excellence on the team while leading with empathy</li>
</ul>
<p>Requirements:</p>
<ul>
<li>20+ years of industry experience building and operating large-scale distributed systems</li>
<li>Proven ability to build, grow, and manage high-performing infrastructure teams, including developing managers and tech leads</li>
<li>Deep experience running large-scale cloud infrastructure systems (AWS, Azure, or GCP), ideally across multiple clouds or regions</li>
<li>Ability to translate requirements from internal engineering teams into clear priorities and execution plans</li>
<li>Fluent across the infrastructure stack (storage, orchestration, observability, and developer platforms), with intuition for how these layers interact</li>
<li>Ability to evaluate and evolve abstractions: knowing when to unify, when to localise, and how to reduce cognitive load for product teams</li>
<li>BS in Computer Science (Masters or PhD preferred)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratise data, analytics, and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. Specific benefits vary by region.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud infrastructure systems, Distributed systems, Infrastructure as Code, Containerisation, Orchestration, Observability, Developer platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201768002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c078633c-28c</externalid>
      <Title>Senior Engineer, Core API - W&amp;B</Title>
      <Description><![CDATA[<p>You will be responsible for building and evolving the core backend systems and shared infrastructure that power our platform.</p>
<p>A significant portion of backend logic is shared across services, and this role will help define, maintain, and scale that foundation.</p>
<p>You will own and improve internal schema and code generation tooling that ensures consistency and correctness across services.</p>
<p>You will work on and extend our custom job scheduler, improving reliability, observability, and execution guarantees for distributed workloads.</p>
<p>You will contribute to the infrastructure that safely executes large-scale concurrent and distributed operations.</p>
<p>You will play a key role in defining and maintaining API standards across teams, ensuring performance, backward compatibility, and clear evolution strategies.</p>
<p>You will collaborate closely with Product and various Engineering teams to design systems that are reliable, scalable, and maintainable over time.</p>
<p>The Core Systems team is responsible for the foundational backend infrastructure that powers Weights &amp; Biases within CoreWeave.</p>
<p>Much of the platform&#39;s critical logic is shared across services, and this role sits at the center of that foundation.</p>
<p>You will work on the systems that other engineers build upon, from execution frameworks and schedulers to schema tooling and API standards.</p>
<p>This is a high-leverage role focused on durability, scalability, and long-term maintainability.</p>
<p>The systems you design and evolve will directly impact reliability, developer velocity, and the ability of the platform to scale with growing workloads.</p>
<p>You&#39;ll collaborate across teams to ensure that shared backend abstractions remain clean, performant, and consistent as we continue to expand our adoption of technologies like GraphQL and gRPC.</p>
<p>If you enjoy owning deep technical infrastructure, shaping engineering standards, and building systems that other engineers depend on every day, this role offers meaningful scope and impact.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>backend engineering experience, designing and maintaining distributed systems, hands-on experience designing and evolving APIs, strong proficiency in Go, Python, or a comparable backend systems language, experience implementing concurrency and parallelism patterns in production systems, familiarity with schema management, code generation tools, or interface definition systems, experience building or operating custom job schedulers, workflow engines, or execution frameworks, experience defining cross-team API standards and governance models, background in high-scale data or ML infrastructure systems, experience improving reliability through observability, metrics, and SLO-driven development practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658736006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd64db3e-49f</externalid>
      <Title>Staff Software Engineer – Customer Experience Intelligence (CXI)</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re shaping the future of how customers experience support at scale. As the Staff Technical Lead for Customer Experience Intelligence, you&#39;ll design intelligent, AI-powered systems that make support faster, smarter, and more effortless.</p>
<p>In this role, you&#39;ll have end-to-end ownership of the architecture and technical strategy behind automation and agentic workflows that reduce mean time to mitigate (MTTM), boost quality, and enable our Support organization to scale impact without scaling headcount. You&#39;ll work hands-on with teams across Support, Product, and Platform Engineering to build seamless systems that anticipate customer needs before they arise.</p>
<p>You&#39;ll lead the technical foundation that transforms how customers experience support: issues are auto-diagnosed, solutions are delivered instantly, and engineers focus their time on the toughest challenges. Your success will mean customers moving faster, trusting Databricks more deeply, and feeling the impact of your systems every day.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the technical vision and architecture for Databricks&#39; Support Automation and Tooling ecosystem</li>
<li>Leading hands-on development of automation to improve customer experience and Support scalability</li>
<li>Driving rapid, iterative development while upholding quality, safety, and reliability standards</li>
<li>Designing agentic workflows that evolve from human-in-the-loop to fully automated systems</li>
<li>Implementing observability, transparency, and rollback mechanisms for AI-driven decisions</li>
<li>Acting as the primary technical interface between Support, Product, and Platform Engineering to align technical roadmaps and unblock dependencies</li>
<li>Setting a high engineering bar for quality, reliability, and maintainability in line with Databricks standards</li>
<li>Mentoring engineers and SMEs across Software and Support Engineering functions</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>A BS or higher degree in Computer Science or a related field</li>
<li>Technical leadership experience in large projects similar to those described, including automation tooling, distributed systems, and APIs</li>
<li>Extensive full-stack development experience</li>
<li>Proven success designing and deploying production-grade automation in complex technical environments</li>
<li>Hands-on experience with ML-assisted systems, decision support, or agentic automation</li>
<li>Deep familiarity with distributed data platforms, developer tooling, and large-scale infrastructure systems</li>
<li>Understanding of multi-cloud environments (AWS, Azure, GCP), compliance, and security constraints</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $190,000-$261,250 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000-$261,250 USD</Salaryrange>
      <Skills>Automation tooling, Distributed systems, APIs, Full-stack development, ML-assisted systems, Decision support, Agentic automation, Distributed data platforms, Developer tooling, Large-scale infrastructure systems, Multi-cloud environments, Compliance, Security constraints</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and operates the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>190000</Compensationmin>
      <Compensationmax>261250</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8416959002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6782038-937</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>As a staff software engineer with a backend focus, you will work with your team to build infrastructure for the Databricks platform at scale.</p>
<p>The impact you&#39;ll have is significant, as our backend teams cover a diverse range of domains, from core compute fabric resource management to service platforms and machine learning infrastructure.</p>
<p>For example, you might work on challenges such as:</p>
<ul>
<li>Supporting Databricks&#39; growth by building foundational infrastructure platforms that enable seamless operation across numerous geographic regions and cloud providers</li>
<li>Implementing cloud-agnostic infrastructure abstractions to help Databricks engineers more efficiently manage and operate their services</li>
<li>Developing tools and processes that drive engineering efficiency at Databricks</li>
</ul>
<p>We enhance the developer experience for Databricks engineers across various areas, including programming languages, linters, static analysis, IDEs, remote development environments, automated release pipelines, and test automation frameworks.</p>
<p>Our current focus is on optimizing the Rust development experience across the organization.</p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>8+ years of professional software development experience</li>
<li>A bachelor&#39;s degree or higher in Computer Science, a related field, or equivalent experience</li>
<li>Proficiency in one or more backend languages such as Java, Scala, or Go</li>
<li>Hands-on experience in developing and operating large-scale, critical distributed backend systems</li>
<li>Demonstrated leadership throughout all project phases, from inception and design to implementation and operations, with a strong ability to drive architectural decisions and guide teams through complex challenges</li>
<li>Experience mentoring engineers, influencing best practices, and fostering a collaborative engineering environment</li>
<li>A self-driven and passionate approach, with a strong focus on delivering impact through team collaboration</li>
<li>Strong written and verbal communication skills to drive alignment across teams and contribute to both technical documentation and strategic decisions</li>
<li>Experience with infrastructure systems, security, and sensitive data, as well as familiarity with Kubernetes or cloud platforms like AWS, Azure, or GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, Go, backend languages, large-scale distributed backend systems, leadership, mentoring, best practices, collaborative engineering environment, infrastructure systems, security, sensitive data, Kubernetes, cloud platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7642799002</Applyto>
      <Location>Aarhus, Denmark</Location>
      <Country>Denmark</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78a9b8f2-81c</externalid>
      <Title>Senior Software Engineer - Data Infrastructure</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute towards the long-term technical roadmap for data-driven and machine learning iteration at Plaid</li>
<li>Lead key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net new ETL pipeline infrastructure, and evolving data warehouse or data lakehouse capabilities</li>
<li>Work with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid</li>
<li>Debug, troubleshoot, and reduce operational burden for our Data Platform</li>
<li>Grow the team via mentorship and leadership, reviewing technical documents and code changes</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience</li>
<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>
<li>Deep understanding of either ML Infrastructure systems (Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring) or Data Infrastructure systems (Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration)</li>
<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>
<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>
<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid builds tools and experiences that thousands of developers use to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>190800</Compensationmin>
      <Compensationmax>286800</Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
  </jobs>
</source>