<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>dcc8a1d6-5a5</externalid>
      <Title>Implementation Director</Title>
<Description><![CDATA[<p>Asia &amp; Middle East Technology</p>
<p>Our team partners with the businesses to build the platforms, systems, and products that our customers use every day. We keep people&#39;s money and data safe, and are at the forefront of driving innovation for our businesses, customers, and colleagues.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and owning the overall implementation and cutover strategy</li>
<li>Developing comprehensive plans covering parallel run, big bang migration, and contingency scenarios</li>
<li>Leading execution of cutover activities, including worst-case scenario planning, rehearsals, and post-go-live hypercare operating model</li>
<li>Ensuring robust mitigation steps for risks and issues</li>
<li>Bringing together complex dependencies across all workstreams</li>
<li>Ensuring BAU change is interlocked with the programme in broader implementation planning</li>
</ul>
<p>To be successful in the role, you should have technology expertise in delivering big bang migrations of key services, experience leading post-implementation activities and incident management, highly effective communication skills, expertise in assessing business impact, and the ability to drive partnership across business, internal technology, and third-party teams.</p>
<p>You&#39;ll achieve more at HSBC. HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Bang migrations, Implementation strategy, Cutover planning, Risk management, Incident management, Communication skills, Business impact analysis, Partnership building</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is a multinational banking and financial services organisation with operations in over 80 countries.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610174523</Applyto>
      <Location>Shanghai</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a closely-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully-remote, distributed team.</li>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
<li>Take strong ownership of your work from end-to-end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p>Equity Stake Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
<li>Equity or comparable benefits may be offered depending on legal limitations.</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4e35d55-5d1</externalid>
      <Title>Technical Program Manager, Safeguards (Infrastructure &amp; Evals)</Title>
      <Description><![CDATA[<p>Job Title: Technical Program Manager, Safeguards (Infrastructure &amp; Evals)</p>
<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the Role</p>
<p>Safeguards Engineering builds and operates the infrastructure that keeps Anthropic&#39;s AI systems safe in production: the classifiers, detection pipelines, evaluation platforms, and monitoring systems that sit between our models and the real world. That infrastructure needs to be not just correct, but reliable: when a safety-critical pipeline goes down or degrades, the consequences can be serious, and they can be invisible until someone looks closely.</p>
<p>As a Technical Program Manager for Safeguards Infrastructure and Evals, you&#39;ll own the operational health and forward momentum of this stack. Your primary responsibility is driving reliability: owning the incident-response and post-mortem process, ensuring SLOs are defined and met in partnership with various teams, and making sure that when things go wrong, the right people know, the right actions get taken, and those actions actually get closed out.</p>
<p>Alongside that ongoing operational rhythm, you&#39;ll coordinate the larger platform investments: migrations, eval-platform improvements, and the cross-team dependencies that connect them. This role sits at the intersection of operations and program management. It requires genuine technical depth: you need to understand how these systems work well enough to triage effectively, judge what&#39;s actually safety-critical versus what can wait, and have informed conversations with the engineers building and maintaining them. But the core of the job is keeping the machine running well and the work moving.</p>
<p>What You&#39;ll Do:</p>
<ul>
<li>Own the Safeguards Engineering ops review, driving the recurring cadence that keeps the team informed and coordinated: surfacing recent incidents and failures, bringing visibility to reliability trends, and making sure the right people are in the room when decisions need to be made.</li>
<li>Drive incident tracking and post-mortem execution</li>
<li>Establish and maintain SLOs with partner teams</li>
<li>Maintain runbook quality and incident-ownership clarity</li>
<li>Drive platform migrations and infrastructure projects</li>
<li>Coordinate evals platform improvements</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have solid technical program management experience, particularly in operational or infrastructure-heavy environments: you&#39;re comfortable owning a mix of ongoing operational cadences and discrete project work simultaneously.</li>
<li>Understand how production ML systems work well enough to triage incidents intelligently and have substantive conversations with engineers about what&#39;s going wrong and why: you don&#39;t need to write the code, but you need to follow the technical thread.</li>
<li>Are energized by closing loops. Post-mortem action items that never get done, SLOs that no one checks, runbooks that go stale: these things bother you, and you know how to build the processes and follow-ups that fix them.</li>
<li>Can work effectively across team boundaries: comfortable coordinating with partner teams (like Inference) where you don&#39;t have direct authority, and skilled at keeping shared work moving through influence and clear communication.</li>
<li>Thrive in environments where the work shifts between &#39;keep the lights on&#39; and &#39;build something new&#39;, and can context-switch between incident follow-ups and longer-horizon platform projects without dropping either.</li>
<li>Have experience with or strong interest in AI safety: you understand why the reliability of a safety-critical pipeline is a different kind of problem than the reliability of a product feature, and that distinction motivates you.</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with SRE practices, incident management frameworks, or on-call operations at scale.</li>
<li>Have worked on or with evaluation infrastructure for ML systems, understanding how evals get designed, run, and interpreted.</li>
<li>Have experience driving infrastructure migrations in complex, multi-team environments, particularly where the migration touches operational systems that can&#39;t go offline.</li>
<li>Be familiar with monitoring and alerting tooling (PagerDuty, Datadog, or equivalents) and the operational culture around them.</li>
</ul>
<p>Deadline to apply: None; applications will be received on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $290,000-$365,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Operational or Infrastructure-heavy environments, Production ML systems, Incident management frameworks, On-call operations, Evaluation infrastructure for ML systems, Infrastructure migrations, Monitoring and alerting tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108695008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cef9a3ff-75c</externalid>
      <Title>Technical Program Manager, Platform</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Platform, you&#39;ll own the programs that stand up and operate Anthropic&#39;s APIs and serving infrastructure across multiple cloud environments.</p>
<p>This means driving deployments from scoping through production, running the platform work that spans them, and working across API, Platform Foundations, Security, our cloud provider counterparts, and whoever else is on the critical path when dependencies and tradeoffs pile up.</p>
<p>Responsibilities:</p>
<ul>
<li>Own end-to-end program execution for Anthropic’s API across major cloud deployments, from scoping through production launch and steady-state operations</li>
<li>Drive the platform programs that cut across individual deployments: the shared foundations that get built once and reused, not rebuilt per cloud</li>
<li>Act as a primary coordination point with cloud provider counterparts, keeping engagement clean across multiple internal teams with touchpoints into the same partner</li>
<li>Partner with engineering leadership to turn technical direction into executable plans with clear owners, dependencies, and risk tracking</li>
<li>Build the program scaffolding (roadmaps, status reporting, decision logs, escalation paths) that lets a fast-moving org stay aligned without slowing down</li>
<li>Drive the hard sequencing conversations when partner commitments, engineering bandwidth, and priorities are in tension, and surface them to leadership with a recommendation</li>
<li>Identify where program coverage is thin relative to the load and help shape how we staff around it</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of technical program management experience, including ownership of large infrastructure or platform programs with many engineering teams and external partners in the mix</li>
<li>Have deep technical fluency in cloud APIs, infrastructure, distributed systems, or platform engineering, enough to be a credible partner to senior engineers on architecture and sequencing, not just a tracker of their decisions</li>
<li>Have run programs spanning organizational boundaries where you had no direct authority over most of the people whose work you depended on, and delivered anyway</li>
<li>Have direct experience with multi-cloud or hybrid cloud environments, large-scale migrations, or building platform abstraction layers</li>
<li>Have worked with major cloud providers (AWS, GCP, Azure) or similar large technology partners, and know how to keep those relationships productive when priorities diverge</li>
<li>Are comfortable operating in ambiguity on the long arc while being ruthlessly concrete on what ships this quarter and who owns it</li>
<li>Have a track record of making a program get cheaper to run the second and third time, not just landing the first instance</li>
<li>Thrive in environments where the plan you wrote last month needs rewriting, without losing the thread on what matters</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with production serving infrastructure, inference systems, or ML platform work</li>
<li>Have moved between senior IC and management roles, or have interest in doing so</li>
<li>Have worked at a company rebuilding systems and org in flight during rapid scale-up</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$435,000 USD</Salaryrange>
      <Skills>Cloud APIs, Infrastructure, Distributed Systems, Platform Engineering, Program Management, Cloud Providers, Multi-Cloud Environments, Hybrid Cloud Environments, Large-Scale Migrations, Platform Abstraction Layers, Production Serving Infrastructure, Inference Systems, ML Platform Work, Senior IC and Management Roles, Rapid Scale-Up</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157003008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ca221b6f-dca</externalid>
      <Title>Technical Program Manager, Safeguards (Infrastructure &amp; Evals)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Safeguards Engineering builds and operates the infrastructure that keeps Anthropic&#39;s AI systems safe in production. As a Technical Program Manager for Safeguards Infrastructure and Evals, you&#39;ll own the operational health and forward momentum of this stack.</p>
<p>Your primary responsibility is driving reliability: owning the incident-response and post-mortem process, ensuring SLOs are defined and met in partnership with various teams, and making sure that when things go wrong, the right people know, the right actions get taken, and those actions actually get closed out.</p>
<p>Alongside that ongoing operational rhythm, you&#39;ll coordinate the larger platform investments: migrations, eval-platform improvements, and the cross-team dependencies that connect them.</p>
<p>This role sits at the intersection of operations and program management. It requires genuine technical depth: you need to understand how these systems work well enough to triage effectively, judge what&#39;s actually safety-critical versus what can wait, and have informed conversations with the engineers building and maintaining them.</p>
<p>But the core of the job is keeping the machine running well and the work moving.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the Safeguards Engineering ops review, driving the recurring cadence that keeps the team informed and coordinated: surfacing recent incidents and failures, bringing visibility to reliability trends, and making sure the right people are in the room when decisions need to be made.</li>
<li>Drive incident tracking and post-mortem execution</li>
<li>Establish and maintain SLOs with partner teams</li>
<li>Maintain runbook quality and incident-ownership clarity</li>
<li>Drive platform migrations and infrastructure projects</li>
<li>Coordinate evals platform improvements</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Solid technical program management experience, particularly in operational or infrastructure-heavy environments</li>
<li>Understanding of how production ML systems work well enough to triage incidents intelligently and have substantive conversations with engineers about what&#39;s going wrong and why</li>
<li>Ability to work effectively across team boundaries</li>
<li>Experience with or strong interest in AI safety</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with SRE practices, incident management frameworks, or on-call operations at scale</li>
<li>Familiarity with monitoring and alerting tooling (PagerDuty, Datadog, or equivalents)</li>
<li>Experience driving infrastructure migrations in complex, multi-team environments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Operational or Infrastructure-heavy Environments, Production ML Systems, Incident Tracking and Post-Mortem Execution, Service-Level Objectives (SLOs), Runbook Quality and Incident-Ownership Clarity, Platform Migrations and Infrastructure Projects, Evals Platform Improvements, SRE Practices, Incident Management Frameworks, On-Call Operations at Scale, Monitoring and Alerting Tooling, Infrastructure Migrations in Complex, Multi-Team Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.ai.png</Employerlogo>
      <Employerdescription>Anthropic develops artificial intelligence systems. It has a growing team of researchers, engineers, and business leaders.</Employerdescription>
      <Employerwebsite>https://anthropic.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108695008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80842249-d89</externalid>
      <Title>Manager, Enterprise Support (London, United Kingdom)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Manager, Enterprise Support to lead our Enterprise Support team and ensure we deliver exceptional experiences to our customers.</p>
<p>As a leader in our Enterprise Support organization, you&#39;ll partner closely with Sales, Product, Engineering, and Support Operations to improve workflows, unlock product insights, and advocate for meaningful system changes.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading and developing the Enterprise Support team, setting a high bar for customer experience, quality, and performance</li>
<li>Managing, coaching, and empowering the team to meet the KPIs most critical to Enterprise success</li>
<li>Partnering with Product Support Operations to recommend and implement operational improvements</li>
<li>Collaborating closely with Sales leadership to unblock high-value customers and support complex organization migrations</li>
<li>Working with Voice of the Customer, Product, and Engineering teams to surface meaningful insights that drive product and journey improvements</li>
</ul>
<p>We&#39;re looking for someone with 4+ years of experience leading high-performing support teams, primarily serving enterprise customers in technical SaaS environments. You should have a consistent focus on elevating both the customer and employee experience through continuous improvement, and a proven ability to partner cross-functionally with Sales and Engineering to advance meaningful customer outcomes.</p>
<p>If you&#39;re excited about this role and have the skills and experience we&#39;re looking for, please apply!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>leadership, customer experience, quality, performance, technical SaaS environments, product insights, operational improvements, complex organization migrations, voice of the customer, product development</Skills>
      <Category>Support</Category>
      <Industry>Software</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a software company that provides a platform for design collaboration.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5740382004</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c0df50e1-9cd</externalid>
      <Title>Consultant, Developer Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Cloud Engineer for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and guidance to our customers, following a consultative approach, to get the most value possible from their Cloudflare investment.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products, and advise on product best practices.</li>
<li>Gather business and technical requirements, use cases and any other information required to build, migrate and deliver a solution on behalf of the customer and transition the Cloudflare working environment to the customer.</li>
<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>
<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>
<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering and other teams to assist technical escalations.</li>
<li>Contribute towards the success of the organization through knowledge sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>
<li>Support building operational assets like templates, automation scripts, procedures, workflows, etc.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in a customer-facing position as a Consultant delivering services.</li>
<li>Demonstrated experience with:
<ul>
<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP.</li>
<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>
<li>Infrastructure as code tools like Terraform.</li>
<li>APIs.</li>
<li>CI/CD pipelines using Azure DevOps or Git.</li>
<li>Implementation and troubleshooting, including observability tooling and logs.</li>
</ul>
</li>
<li>Good understanding and knowledge of:
<ul>
<li>Internet and security technologies such as DDoS, Web Application Firewall, certificates, DNS, CDN, analytics and logs.</li>
<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP.</li>
<li>Performance aspects of an internet property, such as speed, latency, caching, HTTP/3, TLSv1.3.</li>
</ul>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>You have worked with a Cybersecurity company or products and have performed migrations using migration tools.</li>
<li>You have developed application security and performance capabilities.</li>
<li>Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty.</li>
<li>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3, You have worked with a Cybersecurity company or products and have performed migrations using migration tools, You have developed application security and performance capabilities, Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty, The work will be performed in English. Fluency in a second regional European language is a strong advantage</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare provides a network that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7383015</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9094df7f-543</externalid>
      <Title>Salesforce Administrator</Title>
      <Description><![CDATA[<p>Job Title: Salesforce Administrator</p>
<p>About Dialpad</p>
<p>Dialpad is the AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform. With over 70,000 companies worldwide relying on Dialpad, we&#39;re leading the shift to Agentic AI: intelligent agents that automate workflows, resolve customer issues, and accelerate revenue in real-time.</p>
<p>Your Role</p>
<p>As a Salesforce Technical Administrator, you&#39;ll own the administration, configuration, and day-to-day reliability of our Salesforce CRM environment, including key CPQ and sales support workflows. You&#39;ll work closely with Salesforce Engineers, Systems Analysts, developers, and internal stakeholders to deliver scalable functionality, troubleshoot platform issues, and improve the systems that support our global Sales teams.</p>
<p>Responsibilities</p>
<ul>
<li>Own advanced Salesforce administration and configuration across page layouts, fields, objects, validation rules, permissions, and sharing models.</li>
<li>Build and enhance automation using Record-Triggered Flows, Screen Flows, and Autolaunched Flows to streamline business processes.</li>
<li>Support Salesforce CPQ administration, including pricing waterfall logic, price rules, product rules, and advanced approvals.</li>
<li>Troubleshoot issues across configuration, CPQ, integrations, pricing, quoting, and data quality to keep the platform running at speed and with accuracy.</li>
<li>Resolve production issues and support cases within defined SLAs using structured incident management practices.</li>
<li>Partner with developers on more complex requirements and contribute basic Apex, LWC, Visualforce, and SOQL-based solutions for minor enhancements and troubleshooting.</li>
<li>Support deployments and release activities using tools such as Copado, including migrating configuration and minor code changes through agile sprint cycles.</li>
<li>Maintain strong platform documentation, capture recurring issues and solutions, and apply Salesforce best practices for scalability and maintainability.</li>
</ul>
<p>Requirements</p>
<ul>
<li>A bachelor&#39;s degree in a technical discipline or equivalent professional experience.</li>
<li>4+ years of Salesforce administration and support experience in a complex, integrated enterprise environment.</li>
<li>Salesforce Administration expertise in Sales Cloud, including strong experience with configuration, Flows, and the Salesforce security model.</li>
<li>Hands-on knowledge of Salesforce CPQ processes and the ability to support pricing and quoting operations end-to-end.</li>
<li>Working knowledge of Salesforce development concepts such as Apex, LWC, Visualforce, and SOQL, along with experience operating in agile delivery environments.</li>
<li>Experience with Salesforce data management, including migrations, retention, data quality, and issue resolution in integrated enterprise systems.</li>
<li>Familiarity with tools such as Outreach, Marketo, Git, Workrails, Jira, and Copado.</li>
<li>Strong analytical, problem-solving, and communication skills, with the ability to manage a high volume of requests across stakeholders with different levels of technical expertise.</li>
</ul>
<p>Why Join Dialpad</p>
<ul>
<li>Work at the center of the AI transformation in business communications</li>
<li>Build and ship agentic AI products that are redefining how companies operate</li>
<li>Join a team where AI amplifies every employee&#39;s impact</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential. We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection. Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>
<p>Don&#39;t meet every single requirement? If you&#39;re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn&#39;t meet every qualification, we encourage you to apply. Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce Administration, Sales Cloud, Configuration, Flows, Security Model, CPQ Processes, Pricing and Quoting Operations, Apex, LWC, Visualforce, SOQL, Agile Delivery Environments, Data Management, Migrations, Retention, Data Quality, Issue Resolution, Outreach, Marketo, Git, Workrails, Jira, Copado</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is the AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8496643002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58d220e6-02a</externalid>
      <Title>Senior Site Reliability Engineer, Tenant Services: Geo</Title>
      <Description><![CDATA[<p>Job Title: Senior Site Reliability Engineer, Tenant Services: Geo</p>
<p>We are looking for a skilled Senior Site Reliability Engineer to join our Tenant Services, Geo team. As a Senior Site Reliability Engineer, you will be responsible for ensuring the smooth operation of our user-facing services and production systems.</p>
<p>About Us</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. It enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</p>
<p>Responsibilities</p>
<ul>
<li>Execute Dedicated Geo migrations and cutovers end-to-end, including planning, pre-cutover validation, execution, and post-cutover verification and cleanup.</li>
<li>Join the team&#39;s shift and weekend coverage rotation for Dedicated cutovers across EMEA and US hours, and participate in the SaaS Site Reliability Engineering (SRE) on-call rotation to respond to incidents that impact GitLab.com availability.</li>
<li>Operate and improve the Geo operational surface for Dedicated, including:
<ul>
<li>Environment preparation and data hygiene checks prior to migrations.</li>
<li>Execution of replication, validation, and cutover procedures.</li>
<li>Handling Geo-related escalations from Support and internal partners.</li>
</ul>
</li>
<li>Design, build, and maintain automation, tooling, and runbooks that make migrations, cutovers, and Geo escalations as &#39;boring&#39; and repeatable as possible.</li>
<li>Run our infrastructure with tools such as Ansible, Chef, Terraform, GitLab CI/CD, and Kubernetes; contribute improvements back to GitLab&#39;s product and infrastructure where appropriate.</li>
<li>Build and maintain monitoring, alerting, and dashboards that:
<ul>
<li>Detect symptoms early, not just outages.</li>
<li>Track migration and cutover success rates, duration, rollback frequency, and related SLOs.</li>
</ul>
</li>
<li>Collaborate closely with:
<ul>
<li>The core Geo team on improving Geo features and operability.</li>
<li>Dedicated migrations and Support on migration planning, customer communications, and escalation handling.</li>
<li>Other Infrastructure teams on capacity planning, disaster recovery, and reliability improvements.</li>
</ul>
</li>
<li>Contribute to readiness reviews, incident reviews, and root cause analyses, turning learnings into changes in automation, process, or product.</li>
<li>Document every action, including runbooks, architecture decisions, and post-incident reviews, so your findings turn into repeatable practices and automation.</li>
<li>Proactively identify and reduce toil by automating repetitive operational work and simplifying migration workflows.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience operating highly-available distributed systems at scale, ideally in a SaaS environment with customer-facing SLAs.</li>
<li>Hands-on experience with at least one major cloud provider (e.g., Google Cloud Platform or Amazon Web Services), including networking, storage, and managed services.</li>
<li>Experience with Kubernetes and its ecosystem (e.g., Helm), including deploying and troubleshooting workloads.</li>
<li>Experience with infrastructure as code and configuration management tools such as Terraform, Ansible, or Chef.</li>
<li>Strong programming skills in at least one general-purpose language (preferably Go or Ruby) and proficiency with scripting (e.g., Shell, Python).</li>
<li>Experience with observability systems (e.g., Prometheus, Grafana, logging stacks) and using metrics and logs to troubleshoot performance and reliability issues.</li>
<li>Practical exposure to data replication, backup/restore, or migration scenarios (e.g., database replication, storage replication, or Geo-like technologies) where data integrity and downtime risk must be carefully managed.</li>
<li>Comfort participating in an on-call rotation, investigating incidents across the stack, and driving follow-through on corrective actions.</li>
<li>Ability to engage directly with enterprise customers during migrations and incidents, including on live calls and through clear written updates.</li>
<li>Ability to clearly define problems, propose options, and think beyond immediate fixes to improve systems and processes over time.</li>
<li>Ability to be a &#39;manager of one&#39;: self-directed, organized, and able to drive work to completion in a remote, asynchronous environment.</li>
<li>Strong written and verbal communication skills, with a bias toward clear, asynchronous documentation and collaboration.</li>
<li>Alignment with our company values and a commitment to working in accordance with those values.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience working with disaster recovery technologies.</li>
<li>Experience with managed/hosted environments similar to GitLab Dedicated, including regulated or compliance-sensitive customers (e.g., SOC2, ISO).</li>
<li>Prior work on large-scale data migrations or cutovers where customer data integrity, performance, and downtime risk had to be carefully balanced.</li>
<li>Hands-on experience designing and operating database replication, backup/restore, and cutover workflows (for example, PostgreSQL or cloud-managed equivalents such as AWS RDS), including planning and executing low-risk migrations for large datasets.</li>
<li>Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms.</li>
<li>Familiarity with GitLab (self-managed or SaaS), and/or contributions to open source projects.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Experience operating highly-available distributed systems at scale, Hands-on experience with at least one major cloud provider, Experience with Kubernetes and its ecosystem, Experience with infrastructure as code and configuration management tools, Strong programming skills in at least one general-purpose language, Experience working with disaster recovery technologies, Experience with managed/hosted environments similar to GitLab Dedicated, Prior work on large-scale data migrations or cutovers, Hands-on experience designing and operating database replication, backup/restore, and cutover workflows, Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform for DevSecOps. It has over 50 million registered users and over 50% of the Fortune 100 trust it to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8490453002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5834e3ad-7b2</externalid>
      <Title>Senior Site Reliability Engineer - Security and Data Systems (Federal)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Senior Site Reliability Engineer (SRE) - Security and Data Systems</strong></p>
<p>Our company is seeking a highly skilled Senior Site Reliability Engineer to join our team. We are a SaaS company specializing in securing large-scale systems. This role is a blend of software engineering and systems administration, where you&#39;ll be responsible for building and maintaining highly reliable, scalable, and secure infrastructure. You will be a key contributor, applying your expertise to automate manual processes and proactively solve complex problems before they become incidents. The role also includes incident handling and on-call shifts.</p>
<p>*This position requires the ability to access U.S. National Security information. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Platform &amp; Reliability: Design, build, and maintain the core infrastructure that underpins our security SaaS offerings, ensuring high availability, performance, and scalability. This includes building and operating the tooling for our Snowflake data systems.</li>
<li>Automation: Develop robust automation using code to eliminate toil and ensure consistency across our environments. You&#39;ll be a key driver in automating everything from infrastructure provisioning to application deployment and incident response.</li>
<li>Security &amp; Compliance: Work closely with our security teams to embed a security-first mindset into all our processes and infrastructure. You will be responsible for ensuring our systems and data platforms are compliant with industry standards.</li>
<li>Incident Response: Participate in on-call rotations and be a primary responder for critical incidents, leading root cause analysis and implementing preventative measures to ensure issues don&#39;t recur.</li>
<li>Collaboration: Partner with development, data science, and security teams to provide expert guidance on architectural decisions, best practices, and the implementation of new services.</li>
</ul>
<p><strong>Key Skills &amp; Qualifications</strong></p>
<ul>
<li>U.S. Person Status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee)</li>
<li>Strong Coding Skills: You are a developer at heart and are comfortable writing production-level code to solve complex operational challenges.</li>
<li>Infrastructure as Code (IaC): Deep experience with Terraform for provisioning and managing cloud infrastructure and services.</li>
<li>Continuous Delivery: Familiarity with modern CI/CD practices and tools, particularly Spinnaker, to automate and standardize our release pipelines.</li>
<li>Containerization &amp; Orchestration: Expertise in container technologies and hands-on experience managing large-scale, production-ready clusters with Kubernetes.</li>
<li>Database Migrations: Experience with database schema management tools like Flyway for safely and reliably handling database changes.</li>
<li>Data Systems: Direct experience with large-scale data systems, specifically with the Snowflake platform.</li>
<li>AI/ML Experience (a plus): Experience or a strong interest in AI/ML, particularly how these technologies can be applied to improve reliability, security, and operational efficiency (e.g., AIOps, predictive analysis).</li>
<li>Problem-Solving: Excellent analytical and problem-solving skills with a proactive approach to identifying and addressing potential issues.</li>
</ul>
<p>This role requires in-person onboarding and travel to our San Francisco Office during the first week of employment.</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between $147,000 and $202,400 USD.</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$147,000-$202,400 USD</Salaryrange>
      <Skills>U.S. Person Status, Strong Coding Skills, Infrastructure as Code (IaC), Continuous Delivery, Containerization &amp; Orchestration, Database Migrations, Data Systems, AI/ML Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a SaaS company specializing in securing large-scale systems.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7591606</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc3534c1-6a6</externalid>
      <Title>Engineering Manager, GitLab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As the Engineering Manager, GitLab Delivery - Operate, you&#39;ll guide a globally distributed team focused on making it easier for customers to deploy, upgrade, and run GitLab reliably in their own infrastructure.</p>
<p>You&#39;ll help shape the systems and tooling that support environments ranging from single-node virtual machines to large Kubernetes clusters, with a focus on reliability, operational simplicity, upgrade velocity, and zero-downtime capabilities across GitLab.com, GitLab Dedicated, and self-managed deployments.</p>
<p>In this role, you&#39;ll partner closely with a Product Manager and work across Infrastructure Platforms to connect customer needs and business goals with practical engineering choices.</p>
<p>This is a hands-on leadership opportunity for someone who wants to support a high-performing team while influencing how GitLab is delivered at scale.</p>
<p>In your first year, you&#39;ll help the team deliver better deployment and upgrade experiences, guide technical direction in areas like Kubernetes Operators, Helm charts, and cloud-native deployment architectures, and contribute to incident management to help support the availability of GitLab.com.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Guide a globally distributed engineering team and create an environment where team members can do strong work and grow in an all-remote, asynchronous setting.</li>
<li>Hire, onboard, and develop team members who align with GitLab&#39;s values and contribute to an outcome-focused engineering organization.</li>
<li>Manage and improve agile, asynchronous workflows so the team can deliver deployment tooling and services iteratively and reliably.</li>
<li>Partner with Product Management and engineering peers across Infrastructure Platforms to align team priorities with customer needs and business goals.</li>
<li>Own the reliability, upgrade experience, and operational simplicity of GitLab deployments across self-managed environments, GitLab.com, and GitLab Dedicated.</li>
<li>Improve deployment patterns, observability, zero-downtime capabilities, and upgrade orchestration for customers running GitLab on their own infrastructure.</li>
<li>Apply technical judgment in areas such as Kubernetes Operators, Helm charts, and stateful application delivery to guide choices and unblock the team.</li>
<li>Participate in incident management and work with reliability and development teams to help maintain the availability of GitLab.com.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience guiding deployment tooling, platform engineering, or site reliability engineering teams that operate at meaningful scale.</li>
<li>Strong technical knowledge of Kubernetes Operators, Helm charts for stateful applications, and upgrade orchestration patterns.</li>
<li>Familiarity with cloud-native deployment architectures, database lifecycle management, schema migrations, and zero-downtime upgrade strategies.</li>
<li>Experience working on enterprise-scale or consumer-scale platforms, ideally in a product-focused software environment.</li>
<li>Ability to investigate complex deployment and operational issues and explain tradeoffs clearly to both technical and non-technical stakeholders.</li>
<li>Experience building high-performing, distributed teams and supporting team members in an asynchronous, all-remote environment.</li>
<li>Effective cross-functional skills across functions such as Infrastructure, Support, and Customer Success to improve customer outcomes.</li>
<li>Openness to diverse paths into the role, including transferable skills, formal computer science education, or equivalent practical experience, along with interest in open source and developer tools.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The GitLab Delivery - Operate team is part of the Infrastructure Platforms department, which enables how GitLab operates, scales, and is delivered across GitLab.com, GitLab Dedicated, and self-managed offerings.</p>
<p>We are a globally distributed team that owns deployment tooling and operational patterns to help customers run GitLab reliably on infrastructure ranging from virtual machines to Kubernetes clusters.</p>
<p>We work asynchronously across regions and work closely with other Infrastructure teams, along with Support and Customer Success, to turn lessons from operating GitLab at scale into product and tooling improvements that benefit customers across all deployment models.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes Operators, Helm charts, cloud-native deployment architectures, database lifecycle management, schema migrations, zero-downtime upgrade strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting them to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463917002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>982dd81e-416</externalid>
      <Title>Principal Database Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>As a Principal Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>
<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>
<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>
<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>
<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>
<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern database technologies and data stores that improve reliability, performance, and product capabilities.</li>
<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>
<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>
<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>
<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>
<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>
<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>
<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>
<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>
<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>
<li>Openness to collaborating with people from diverse technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$157,900-$338,400 USD</Salaryrange>
      <Skills>PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8231379002</Applyto>
      <Location>Remote, EMEA; Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>81b4e02b-98a</externalid>
      <Title>Technical Project Manager</Title>
      <Description><![CDATA[<p>We are seeking a Technical Project Manager to join our Global Services team. As a Technical Project Manager, you will be responsible for managing customer implementation for our developing solution set. You will partner with Engineering Teams to plan and deliver a maintainable private cloud Regrello platform, work with Customer Success Managers and customers to advise on solution design for key manufacturing use cases, and enable customer technical resources to become experts in the technical aspects of the Regrello solution. You will also act as the key point of contact to manage technical deployments of customer-hosted Regrello solutions and AI capabilities, ensure a gold-standard experience for our largest and most strategic customers, and stay on the leading edge of Regrello&#39;s product and technical offerings. We are looking for someone with 8+ years of experience in Technical Project Management, with a strong focus on ERP implementations (SAP or Oracle) and large-scale enterprise system migrations to the cloud. You should have extensive experience managing end-to-end ERP implementations, upgrades, and migrations, strong expertise in application release management, change control processes, and system cutovers, and proven ability to lead cross-functional teams through complex ERP related deployments. You should also have a deep understanding of change management principles to drive user adoption, minimize disruption, and align stakeholders throughout ERP implementations, skilled in client and vendor management, and excellent communication skills for into customer-friendly presentations and executive reports. We offer industry-leading compensation with equity in the company, excellent healthcare benefits, flexible remote work from anywhere in the continental US and European time zones, and quarterly &#39;on-sites&#39; with the entire company to bring us closer together.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$135,000-190,000 per year</Salaryrange>
      <Skills>Technical Project Management, ERP Implementations (SAP or Oracle), Large-Scale Enterprise System Migrations to the Cloud, Application Release Management, Change Control Processes, System Cutovers, Client and Vendor Management, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Regrello</Employername>
      <Employerlogo>https://logos.yubhub.co/regrello.com.png</Employerlogo>
      <Employerdescription>Regrello is a 60-person startup that provides an AI-driven platform for automating manufacturing and supply chain processes.</Employerdescription>
      <Employerwebsite>https://regrello.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/regrello/7c4794ee-ea9d-40eb-9ac7-5a743137a616</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>22fe5cb2-ba9</externalid>
      <Title>Engineering Manager, Datastores</Title>
      <Description><![CDATA[<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of creativity.</p>
<p>This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers.</p>
<p>We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>
<p>We&#39;re looking for an Engineering Manager, Datastores to lead the team responsible for the reliability, scalability, and evolution of Webflow’s core production databases, primarily MongoDB and PostgreSQL. This team operates at the heart of our application and hosting stack, enabling product teams to ship confidently while maintaining high standards of performance, durability, security, and data residency.</p>
<p>Webflow’s product and hosting platform operates at a significant scale. The Datastores team sits at a critical boundary between application velocity and system durability. This is a high-leverage leadership role at the core of Webflow’s infrastructure strategy.</p>
<p><strong>About the role:</strong></p>
<ul>
<li>Lead and grow a team of Database engineers responsible for MongoDB and PostgreSQL in production.</li>
<li>Own the operational excellence of our database layer, including availability, durability, performance, cost efficiency, and data residency.</li>
<li>Drive roadmap and strategy for multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, and infrastructure automation (Pulumi/Terraform).</li>
<li>Partner with Product Engineering to guide new access patterns, review high-impact launches for database risk, and establish guardrails that enable velocity without compromising reliability.</li>
<li>Improve reliability through proactive failure-mode detection, clear SLOs, actionable alerting, and high-quality incident response and retrospectives.</li>
<li>Build self-service tooling and paved roads for migrations, connection management, indexing, and query best practices.</li>
<li>Mentor and grow senior and staff engineers while contributing to broader infrastructure strategy across AWS, Kubernetes, and stateful systems architecture.</li>
</ul>
<p><strong>About you:</strong></p>
<ul>
<li>BS / BA college degree or relevant experience</li>
<li>Business-level fluency to read, write and speak in English</li>
<li>2+ years of experience leading high-performing engineering teams.</li>
<li>6+ years of hands-on experience operating and scaling production databases (MongoDB and/or PostgreSQL preferred).</li>
<li>Experience running business-critical, high-throughput systems with strong availability and durability requirements.</li>
</ul>
<p>You’ll thrive in this role if you:</p>
<ul>
<li>Bring deep expertise in operating and scaling production databases (e.g., replication, failover, indexing, query planning, migrations) and have led teams supporting stateful, multi-region systems with strict uptime requirements.</li>
<li>Balance strong architectural judgment with pragmatism, evolving our datastore strategy while enabling product teams to ship quickly and safely.</li>
<li>Think in terms of SLOs, capacity models, and long-term architectural trade-offs, with hands-on experience in infrastructure as code (Pulumi/Terraform), Kubernetes, and AWS.</li>
<li>Bring strong systems-level thinking to performance and reliability, identifying root causes across application, database, and infrastructure layers and building preventative solutions.</li>
<li>Lead calmly through high-severity incidents, drive blameless postmortems and systemic improvements, and build strong cross-functional relationships grounded in craftsmanship and continuous improvement.</li>
<li>Stay curious and open to growth: demonstrate a proactive embrace of AI, actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>
</ul>
<p><strong>Our Core Behaviors:</strong></p>
<ul>
<li>Build lasting customer trust.</li>
<li>Win together.</li>
<li>Reinvent ourselves.</li>
<li>Deliver with speed, quality, and craft.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Ownership in what you help build.</li>
<li>Health coverage that actually covers you.</li>
<li>Support for every stage of family life.</li>
<li>Time off that’s actually off.</li>
<li>Wellness for the whole you.</li>
<li>Invest in your future.</li>
<li>Monthly stipends that flex with your life.</li>
<li>Bonus for building together.</li>
</ul>
<p><strong>Be you, with us:</strong></p>
<p>At Webflow, equality is a core tenet of our culture. We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database engineering, MongoDB, PostgreSQL, infrastructure automation, Pulumi/Terraform, Kubernetes, AWS, leadership, team management, operational excellence, availability, durability, performance, cost efficiency, data residency, multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, self-service tooling, paved roads, migrations, connection management, query best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Webflow</Employername>
      <Employerlogo>https://logos.yubhub.co/webflow.com.png</Employerlogo>
      <Employerdescription>Webflow is a privately held company that builds a Digital Experience Platform.</Employerdescription>
      <Employerwebsite>https://webflow.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/webflow/jobs/7648674</Applyto>
      <Location>Argentina Remote</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>89e6092c-0c9</externalid>
      <Title>Backend Engineer, Privy</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Backend Engineer to join our team at Privy. As a Backend Engineer, you will play a key role in creating software that turns complex technical systems into delightful developer tools. You will assemble tried-and-true primitives into intuitive, responsive APIs and beautiful interfaces.</p>
<p>Our Engineering team believes in open-source work and transparency with our teammates and users. We encourage each other to think big, run experiments and follow our curiosity so we can build better tooling that lets developers shine and empower their users.</p>
<p>Responsibilities:</p>
<ul>
<li>Building and maintaining production systems at scale</li>
<li>Designing and implementing modern APIs</li>
<li>Managing database migrations and infrastructure configuration</li>
<li>Writing maintainable, well-tested, modular code</li>
<li>Collaborating with cross-functional teams to deliver high-quality software</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of experience building and maintaining a production system at scale</li>
<li>Understanding of modern API development best practices and design</li>
<li>Experience in building data models, managing database migrations and best practices, and infrastructure configuration</li>
<li>Ability to thrive in a fast-paced environment</li>
<li>Growth mindset and constant curiosity and fearlessness to dive into the unknown</li>
<li>Excellent written and verbal communication skills, including the ability to write clear technical documentation</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Strong preference for experience in an API-based business and in payments, fintech, or crypto</li>
<li>Written open-source developer tooling</li>
<li>Published about your work (open source code, internal or external presentations, blog posts, etc.)</li>
<li>Past experience working in authentication or security</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>modern API development, database migrations, infrastructure configuration, maintainable code, well-tested code, API-based business, payments, fintech, crypto, open-source developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Privy</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Privy builds simple, flexible developer tooling that enables users to take control of their online presence.</Employerdescription>
      <Employerwebsite>https://Privy</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7235875</Applyto>
      <Location>NYC-Privy</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>f64b88a6-514</externalid>
      <Title>Technical Program Manager, Fixed Term Contract (Tools, Data &amp; Migrations)</Title>
      <Description><![CDATA[<p>We are seeking a Technical Program Manager to join our People &amp; Culture team. As a Technical Program Manager, you will apply program management methodologies to multiple initiatives, balancing progress and risk to deliver sustainable change to meet objectives.</p>
<p>You will work with HR professionals, engineers, data scientists, and leadership stakeholders to define project scope, goals, timelines, and resource requirements.</p>
<p>Key responsibilities:</p>
<ul>
<li>Work with your team to organise and optimise a portfolio of HR software &amp; data projects, including migrations to new systems and applying new AI techniques to tools &amp; data.</li>
<li>Act as the system owner and domain expert for GDM&#39;s HR tooling &amp; people data stack; understand how tools connect to each other, how they interact with Google systems, and how their data is used downstream.</li>
<li>Develop, maintain, and improve reporting &amp; dashboards based on HR system data</li>
<li>Produce user guides, process documentation, and other forms of educational resources.</li>
<li>Collaborate with teams across both GDM and Google to ensure that GDM&#39;s People &amp; Culture priorities are effectively translated into delivery plans, and available resources are used on the most important problems</li>
<li>Work with your team on all required elements of end-to-end project planning and delivery, using your knowledge of project methodologies including tools and techniques such as stand ups, retrospectives, agile boards, project plans etc</li>
<li>Clarify, communicate, and drive decisions where tradeoffs are necessary between HR process priorities and HR tooling &amp; data options.</li>
<li>Effectively translate technical information to non-technical audiences, ensuring clarity and alignment</li>
<li>Work effectively across international teams and various stakeholders.</li>
<li>Track progress, maintain up to date information and relevant technical documentation, anticipate and propose solutions to issues and risks</li>
<li>Troubleshoot and resolve technical challenges, making sound decisions that balance feasibility and project timelines.</li>
<li>Critically evaluate technical proposals, understanding the trade-offs between different approaches, and weigh in on the viability of solutions</li>
<li>Demonstrate a curious mindset and a commitment to learning and understanding your team&#39;s technical field, and your broader understanding of AI</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Passionate about AI &amp; proactive about learning and acquiring knowledge to enhance your domain knowledge</li>
<li>Strong technical understanding of HR tools &amp; systems, including experience with platform migrations</li>
<li>Thrive in collaborative environments, bridging the gap between engineers and stakeholders, ensuring everyone is working towards a shared vision</li>
<li>Introductory understanding of SQL, dashboarding, and generating reports from raw data</li>
<li>Comfortable in a fast-paced, ambiguous environment, taking ownership and driving projects independently</li>
<li>Flexible, adaptable and highly responsive to the needs of the project, team and wider group</li>
<li>Strong communication skills, ability to develop meaningful relationships with key stakeholders and leverage these to influence action and outcomes, ensuring alignment between technical teams and business stakeholders</li>
<li>Natural problem-solver, readily identifying the root causes of technical challenges to implement elegant solutions</li>
<li>Ability and curiosity to use AI tools practically and effectively in your work, with a recognition and awareness of AI&#39;s responsible use, risks, and limitations</li>
<li>BS degree in Computer Science, Engineering, or related technical field</li>
<li>Knowledge of Software Development Lifecycles</li>
<li>Product Management experience (direct or partnering closely)</li>
</ul>
<p>At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, HR tools &amp; systems, Platform migrations, SQL, Dashboarding, Report generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7588406</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>d4c3e8b3-875</externalid>
      <Title>Windows Administrator</Title>
      <Description><![CDATA[<p>We are seeking an experienced Windows Administrator to support the technology initiatives of the IT Infrastructure team at Keywords Studios. As a Windows Administrator, you will be responsible for follow-the-sun delivery and support of related services, prompt reaction on all server and cloud infrastructure incidents as 2nd line support, and cooperation with other infrastructure teams for resolution.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Ensure that all escalated incidents are handled within SLAs.</li>
<li>Act as expert support for Windows stack related incidents and support requests.</li>
<li>Manage problem resolution with third party vendors.</li>
<li>Participate in Problem management processes.</li>
<li>Support the company&#39;s Windows infrastructure on premises and in the cloud.</li>
<li>Provide operational administration and configuration support for highly available server landscapes.</li>
<li>Support MS Active Directory, design Group Policies.</li>
<li>Deliver new services according to the business requirements.</li>
<li>Participate in integration projects, ensuring that new and existing studios are brought to the latest infrastructure standards.</li>
<li>Identify opportunities for process improvement and efficiency enhancements.</li>
<li>Create and maintain technical documentation on all system designs and configurations, troubleshooting procedures.</li>
<li>Take ownership of projects to set up or upgrade server infrastructure, with support from the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in a relevant technical field or equivalent experience.</li>
<li>Strong understanding of Windows stack technologies, standards and trends.</li>
<li>Strong technical background with 3+ years’ experience in Windows stack administration.</li>
<li>Very good technical knowledge of the Microsoft stack, Active Directory and its components, Exchange, VMware, Hyper-V, and GPOs.</li>
<li>Strong technical knowledge of Storage and Server equipment, virtualization and production setups.</li>
<li>Strong technical knowledge of Cloud Infrastructure, Azure, AWS.</li>
<li>Experience with scripting.</li>
<li>Experience with Backup tools and solutions.</li>
<li>Experience with IT infrastructure migrations.</li>
<li>Strong understanding of Infrastructure change management.</li>
<li>Strong communication and presentation skills, with the ability to articulate technical concepts to non-technical audiences.</li>
<li>Strong analytical and problem-solving skills, with the ability to translate business needs into technical requirements and ability to identify and resolve complex IT infrastructure issues.</li>
<li>Strong decision-making skills.</li>
<li>Strong understanding of gaming industry dynamics and trends, with a passion for gaming.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Private Medical Care</li>
<li>EAP system for supporting wellbeing of Employees</li>
<li>Cafeteria System</li>
<li>Leisure Zones, coffee and fruits in the office</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Windows stack technologies, Microsoft Stack, Active Directory, Exchange, VMware, Hyper-V, GPOs, Storage and Server equipment, Virtualization, Cloud Infrastructure, Azure, AWS, Scripting, Backup tools and solutions, IT infrastructure migrations, Infrastructure change management</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Keywords Studios</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Keywords Studios is a global services platform for video games and beyond, providing technical services to leading content creators and publishers.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/4D3EB9D0DF</Applyto>
      <Location>Katowice</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>0e50f5ba-8b9</externalid>
      <Title>Hardware Development Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>Hardware Development Infrastructure Engineer</strong></p>
<p><strong>About the Team:</strong></p>
<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI&#39;s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>
<p><strong>About the Role</strong></p>
<p>We&#39;re looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI&#39;s hardware development lifecycle. You&#39;ll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.</p>
<p>This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. You&#39;ll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Partner with hardware teams on workflows and tooling: Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.</li>
<li>Build and operate regression systems at scale: Own regressions end-to-end—from definition and scheduling to execution, results ingestion, triage, and reporting—while improving throughput, reproducibility, and flake reduction.</li>
<li>Own CI/CD for infrastructure and tooling: Design and operate pipelines for infrastructure-as-code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.</li>
<li>Run cloud and HPC platforms: Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node lifecycles, and cost-performance tradeoffs.</li>
<li>Build data foundations and visibility: Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.</li>
<li>Drive operational excellence: Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification).</li>
<li>Strong infrastructure fundamentals, including cloud platforms, networking, security, performance, and automation.</li>
<li>Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure-as-code practices (e.g., Terraform, Bicep; configuration management tools a plus).</li>
<li>Strong programming skills (Python preferred) and solid software engineering and scripting practices.</li>
<li>Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.</li>
<li>Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.</li>
<li>Clear communicator with strong judgment—able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience operating Slurm or other large-scale cluster schedulers.</li>
<li>Experience with enterprise authentication and directory services (e.g., Entra ID, LDAP, FreeIPA, SSSD).</li>
<li>Experience building or operating backend and middleware systems.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>Compensation</strong></p>
<ul>
<li>$260K – $335K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$260K – $335K • Offers Equity</Salaryrange>
      <Skills>chip development workflows, EDA domain, cloud platforms, networking, security, performance, automation, cloud environments, infrastructure-as-code, configuration management tools, programming skills, software engineering, scripting practices, CI/CD systems, testing, release workflows, database experience, schema design, migrations, indexing, operational safety, Slurm, enterprise authentication, directory services, backend and middleware systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that develops and commercializes advanced artificial intelligence (AI) systems. The company was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f2908f94-93a9-476b-ac83-b03392ae827d</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>7ead86a2-459</externalid>
      <Title>Games - Server Programmer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Software Engineer with a primary background in Server Programming to develop features and technology across our online systems. You&#39;ll architect and deliver scalable backend features for a live mobile title, own and evolve our CI/CD and deployment infrastructure, and partner with design, production, client, QA and CS to ship high-quality features safely and at pace.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Develop software features end-to-end.</li>
<li>Architect and improve core online systems (game server, multiplayer engine, session and player-data services) for reliability, performance and cost at scale.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Server-side engineering in C#/.NET (e.g., ASP.NET, Web APIs)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Server-side engineering in C#/.NET, Experienced with databases (SQL and NoSQL) and caching (e.g., Redis): schema design, query optimisation, data migrations, and operational best practices, CI/CD (Jenkins/GitLab), version control (Git/GitLab flows), infrastructure and hosting (on-prem and/or AWS), and observability (logs/metrics/tracing) for live services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Server-Programmer/212311</Applyto>
      <Location>Manchester</Location>
      <Country></Country>
      <Postedate>2026-02-17</Postedate>
    </job>
    <job>
      <externalid>c1e20e4a-3c2</externalid>
      <Title>SAP S/4HANA Project Manager (all genders)</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>You&#39;ll be responsible for the planning, steering, and successful implementation of SAP S/4HANA projects - including budget, time planning, and resource management. You&#39;ll lead and coordinate interdisciplinary, sometimes international project teams, as well as efficiently manage stakeholders.</p>
<ul>
<li>Overall responsibility for the planning, steering, and successful delivery of SAP S/4HANA projects, including budget, scheduling, and resource management</li>
<li>Leading and coordinating interdisciplinary, sometimes international project teams, along with efficient stakeholder management</li>
<li>Applying the SAP Activate methodology and running Fit-to-Standard workshops as well as test, migration, and go-live activities</li>
<li>Ensuring project quality, risk management, and adherence to defined standards across the entire project lifecycle</li>
</ul>
<p><strong>What you need</strong></p>
<p>To be successful in this role, you&#39;ll need the following qualifications:</p>
<ul>
<li>Completed degree and at least 6 years of professional experience in consulting, ideally in the SAP S/4HANA environment</li>
<li>Sound knowledge of modern project management methodologies (e.g., SAP Activate) and a deep technical understanding of SAP technologies</li>
<li>Proven experience in leading, steering, and successfully delivering complex end-to-end S/4HANA transformation projects, ideally in an international setting</li>
</ul>
<p><strong>Why this matters</strong></p>
<p>This role is crucial for the successful implementation of SAP S/4HANA projects, which are essential for the digital transformation of our customers. As a project manager, you&#39;ll play a key role in ensuring the quality and success of these projects.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SAP S/4HANA, Project Management, Leadership, Stakeholder Management, Risk Management, SAP Activate, Fit-to-Standard Workshops, Test, Migration and Go-Live Activities</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>MHP - A Porsche Company</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain. As a digitalization pioneer in the sectors of mobility and manufacturing, MHP transfers its expertise to various industries and is the premium partner for thought leaders on the way to a better tomorrow.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=12113</Applyto>
      <Location>Germany (nationwide)</Location>
      <Country></Country>
      <Postedate>2025-12-08</Postedate>
    </job>
  </jobs>
</source>