<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>22bcbb50-ef4</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimisation skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7772798b-532</externalid>
      <Title>Staff Software Engineer - Java( Backend Architect )</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Staff Software Engineer to work on our Advanced Apps team with focus on enhancing and managing connectors to SaaS applications e.g., Workday, Salesforce, GCP, AWS, etc. They will work closely with the Lifecycle Management (LCM) team that provides a platform for automating Joiner, Mover, Leaver processes. The Connectors allow customers the flexibility to Import and Provision identity and entitlements to their SaaS applications. This role is to build, design solutions, and maintain our connectors to match application&#39;s features and for scale.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Work with senior engineering team in major development projects, design and implementation</li>
<li>Interface with cross-functional teams (Architects, QA, Product, Technical Support, Documentation, and UX teams) to understand application specific protocols and build connectors</li>
<li>Analyze/Refine Requirements with Product Management.</li>
<li>Quick prototyping to validate scale and performance.</li>
<li>Design &amp; Implement features with functional and unit tests along with monitoring and alerts</li>
<li>Conduct code reviews, analysis and performance tuning</li>
<li>Work with QA team to outline and implement comprehensive test coverage for application specific features</li>
<li>Troubleshooting and support for customer issues and debugging from logs (Splunk, Syslogs, etc.)</li>
<li>Provide technical leadership and mentorship to more junior engineers</li>
</ul>
<p>Required knowledge, skills, and abilities:</p>
<ul>
<li>The ideal candidate is someone who is experienced building software systems to manage and deploy reliable and performant infrastructure and product code at scale on a cloud infrastructure</li>
<li>8+ years of Software Development in Java, preferably significant experiences with SCIM and Spring Boot.</li>
<li>5+ years of development experience building services, internal tools and frameworks</li>
<li>2+ years experience automating and deploying large scale production services in AWS, GCP or similar.</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures</li>
<li>Experience with RESTful APIs and SOAP apis.</li>
<li>Ability to work effectively with distributed teams and people of various backgrounds</li>
<li>Lead and mentor junior engineers</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, SCIM, Spring Boot, AWS, GCP, RESTful APIs, SOAP apis, Caching, Stream processing, Resilient architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is the leading independent provider of enterprise identity. The company provides a platform for organisations to securely connect the right people to the right technologies at the right time.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6883425</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9f403a5-e14</externalid>
      <Title>Staff Engineer, Data Services</Title>
      <Description><![CDATA[<p>The Data Platform Team at CoreWeave is seeking a Staff Software Engineer with specialization in database and stream processing to help fulfill the goal of our global datastore strategy and establish communication models for our data flow.</p>
<p>As a member of the Data Platform Team, you will have the opportunity to drive technical decision-making to accelerate delivery, mentor engineers, and grow team capability. You will champion event-driven architecture adoption and build consensus across the organization. You will participate in the company&#39;s data infra strategy planning and initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs.</li>
<li>Develop a stream processing architecture and solve for scalability and reliability.</li>
<li>Improve the performance, security, reliability, and scalability of our data platforms, and related services and participate in the teams on-call rotation.</li>
<li>Establish guidelines, guard rails for data access and storage for stakeholder teams.</li>
<li>Ensure compliance with standards for data protection regulation.</li>
</ul>
<p>To be successful in this role, you will need to have 12+ years of software engineering experience. You should understand the CAP theorem and concurrency models, and be able to clearly define data models and establish guidelines around data management. You should be familiar with one of the distributed NewSQL datastores such as CockroachDB, TiDB, YDB, Yugabyte and/or stream processing tools such as NATS or Kafka.</p>
<p>Additionally, you should have experience with designing and operating systems at scale, API designs and microservices, and Kubernetes and have interest or experience with using it for event-driven and/or stateful orchestration. You should be excited to have the opportunity to contribute to a Kubernetes operator in order to manage data systems.</p>
<p>The base salary range for this role is $188,000 to $250,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>database, stream processing, NewSQL datastores, Kubernetes, API designs, microservices, event-driven architecture, scalability, reliability, security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4592097006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a7d0cf0f-a3a</externalid>
      <Title>Senior Engineer- Data Platforms</Title>
      <Description><![CDATA[<p>The Data Platform Team serves as the experts on managing data infrastructure for CoreWeave. Our data infrastructure includes managed databases, data ingestion, data flow, data lakes, and other data retrieval for CoreWeave and its customers.</p>
<p>We are seeking senior software engineers with specialization in database and stream processing who can help us fulfill the goal of our global datastore strategy and establish communication models for our data flow. This individual will work with a team of mixed skilled engineers and have the opportunity to work on the full range of rewarding challenges that come with the business of building a cloud in a communicative, supportive, and high-performing environment.</p>
<p>As a member of the Data Platform Team you will have the opportunity to:</p>
<ul>
<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs</li>
<li>Participate in operations and scaling of relational data platforms</li>
<li>Develop a stream processing architecture and solve for scalability and reliability</li>
<li>Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team’s on-call rotation</li>
<li>Establish guidelines and guard rails for data access and storage for stakeholder teams</li>
<li>Ensure compliance with standards for data protection regulation</li>
<li>Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself</li>
</ul>
<p>The ideal candidate will have 5+ years of experience in a software or infrastructure engineering industry, with experience operating services in production and at scale and familiarity with reliability engineering concepts such as different types of testing, progressive deployments, error budgets, observability, and fault-tolerant design.</p>
<p>The base salary range for this role is $175,000 to $210,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 to $210,000</Salaryrange>
      <Skills>database and stream processing, managed databases, data ingestion, data flow, data lakes, APIs, operational experience, reliability engineering, testing, progressive deployments, error budgets, observability, fault-tolerant design, Kubernetes, Go, Linux distributions, shell scripting, Linux storage and networking stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI degradation. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4562276006</Applyto>
      <Location>Bellevue, WA / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b687767a-7a1</externalid>
      <Title>Director of Engineering, Security Risk Management</Title>
      <Description><![CDATA[<p>We&#39;re seeking an exceptional Engineering Lead to drive the evolution of GitLab&#39;s Security Risk Management (SRM) stage into a world-class platform for vulnerability analysis and remediation at enterprise scale.</p>
<p>This is a rare opportunity to architect and build distributed systems that will fundamentally change how large organisations approach application security and developer security workflows.</p>
<p>As the SRM Stage Lead, you&#39;ll be responsible for transforming our engineering culture toward high-performance distributed systems while delivering an exceptional user experience for both Application Security professionals and Developers.</p>
<p>You&#39;ll own the technical strategy for processing, analysing, and remediating vulnerabilities across massive codebases and complex enterprise environments.</p>
<p><strong>Technical Leadership &amp; Architecture</strong></p>
<ul>
<li>Design distributed systems architecture capable of processing vulnerability data from thousands of repositories, millions of commits, and complex dependency graphs in real-time</li>
<li>Drive storage system decisions for multi-petabyte security datasets, balancing query performance, cost efficiency, and data retention requirements across time-series, graph, and document storage paradigms</li>
<li>Architect scalable analysis pipelines that can ingest vulnerability feeds, correlate findings across multiple security tools, and provide actionable intelligence to both security teams and individual developers</li>
<li>Lead the technical evolution from monolithic security scanning to microservices-based, event-driven vulnerability management systems</li>
</ul>
<p><strong>Engineering Culture Transformation</strong></p>
<ul>
<li>Champion high-performance systems thinking throughout the team, establishing patterns for horizontal scaling, efficient resource utilisation, and fault-tolerant distributed computing</li>
<li>Establish technical standards for system observability, chaos engineering, and performance optimisation in security-critical systems</li>
<li>Mentor and develop senior engineers in distributed systems design, database optimisation, and large-scale system architecture</li>
<li>Drive architectural decision records (ADRs) for major technical decisions, particularly around data storage, processing frameworks, and system boundaries</li>
</ul>
<p><strong>Product &amp; User Experience Excellence</strong></p>
<ul>
<li>Own the end-to-end user journey (in partnership with PM) for both AppSec professionals managing enterprise-wide risk and developers receiving actionable security feedback in their workflow</li>
<li>Design APIs and interfaces that abstract complexity while providing the power and flexibility that security professionals demand</li>
<li>Collaborate with Product Management, UX and Product Design to translate complex technical capabilities into intuitive user experiences</li>
<li>Establish feedback loops with large enterprise customers to ensure our technical solutions scale with their organisational complexity</li>
</ul>
<p><strong>Strategic Technical Execution</strong></p>
<ul>
<li>Evaluate and integrate cutting-edge technologies in areas such as graph databases, stream processing, machine learning inference at scale, and distributed caching, in collaboration with GitLab’s Infrastructure, Data and AI teams</li>
<li>Own the technical roadmap for vulnerability correlation, risk scoring, and automated remediation workflows</li>
<li>Drive partnerships with other GitLab stages to ensure seamless integration across the DevSecOps platform</li>
<li>Lead incident response for availability and performance issues in customer-facing security systems</li>
</ul>
<p><strong>What You’ll Bring</strong></p>
<ul>
<li>10+ years of software engineering experience with 5+ years leading distributed systems at scale (&gt;100M daily operations)</li>
<li>Deep expertise in designing and operating high-throughput, low-latency distributed systems with complex data models</li>
<li>Proven experience with polyglot persistence strategies, including relational databases (PostgreSQL, Cloud Spanner), time-series databases, graph databases, and distributed key-value stores</li>
<li>Strong background in stream processing frameworks (Apache Kafka, Apache Flink, or similar) and event-driven architectures</li>
<li>Hands-on experience with container orchestration (Kubernetes) and cloud-native observability stacks</li>
<li>Security domain knowledge with understanding of vulnerability assessment, static analysis, dependency scanning, or application security testing</li>
</ul>
<p><strong>Leadership &amp; Communication</strong></p>
<ul>
<li>Proven track record of leading and growing high-performing engineering teams (40+ engineers)</li>
<li>Experience transforming engineering culture and establishing technical excellence standards in fast-growing organisations</li>
<li>Strong technical communication skills with ability to present complex architectural decisions to executive stakeholders</li>
<li>Collaborative leadership style with experience working across multiple engineering teams and product stakeholders</li>
</ul>
<p><strong>Problem-Solving &amp; Innovation</strong></p>
<ul>
<li>Systems thinking approach to complex technical problems with demonstrated ability to make appropriate trade-offs between performance, scalability, and maintainability</li>
<li>Experience with A/B testing frameworks and data-driven decision making in technical contexts</li>
<li>Track record of successfully delivering large-scale technical migrations or architectural transformations</li>
<li>Startup or high-growth company experience with ability to balance technical debt with rapid feature delivery</li>
</ul>
<p><strong>About the team</strong></p>
<p>Security Risk Management sits at the heart of modern DevSecOps. The systems you build will directly impact how Fortune 500 companies protect their applications and how millions of developers integrate security into their daily workflow.</p>
<p>You&#39;ll have the opportunity to define the future of application security tooling while working with some of the most challenging distributed systems problems in the industry.</p>
<p>The Technical Challenge</p>
<p>You&#39;ll be solving some of the most interesting distributed systems problems in the security space:</p>
<ul>
<li>Scale: Processing vulnerability data for organisations with 100,000+ repositories and millions of developers</li>
<li>Performance: Sub-second query response times for complex security analytics across massive datasets</li>
<li>Reliability: 99.95%+ uptime SLAs for security-critical workflows that can&#39;t afford downtime</li>
<li>Complexity: Correlating findings across 20+ different security tools while maintaining data lineage and audit trails</li>
<li>User Experience: Making complex security data accessible to both security experts and developers with varying security expertise</li>
</ul>
<p><strong>Salary</strong></p>
<p>The base salary range for this role’s listed level is currently for residents of the United States.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Base salary range for this role’s listed level is currently for residents of the United States.</Salaryrange>
      <Skills>Distributed systems, Polyglot persistence strategies, Stream processing frameworks, Event-driven architectures, Container orchestration, Cloud-native observability stacks, Security domain knowledge, Vulnerability assessment, Static analysis, Dependency scanning, Application security testing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is anデvelopment platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8195921002</Applyto>
      <Location>Remote, Canada; Remote, EMEA; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ff568ca-d59</externalid>
      <Title>Senior Software Engineer - Data Infrastructure Services</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a senior software engineer to join its Data Platforms Team. The ideal candidate will have experience in database and stream processing, and will be responsible for designing and implementing the platform to deliver data to teams with a focus on providing managed solutions through APIs.</p>
<p>The successful candidate will participate in operations and scaling of relational data platforms, develop a stream processing architecture, and improve the performance, security, reliability, and scalability of our data platforms and related services. They will also establish guidelines, guardrails for data access and storage for stakeholder teams, and ensure compliance with standards for data protection regulation.</p>
<p>In addition to technical skills, the ideal candidate will be able to grow, change, invest in their teammates, be invested-in, share their ideas, listen to others, be curious, have fun, and be themselves. CoreWeave values diversity and inclusion, and encourages candidates from all backgrounds to apply.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs</li>
</ul>
<ul>
<li>Participate in operations and scaling of relational data platforms</li>
</ul>
<ul>
<li>Develop a stream processing architecture</li>
</ul>
<ul>
<li>Improve the performance, security, reliability, and scalability of our data platforms and related services</li>
</ul>
<ul>
<li>Establish guidelines, guardrails for data access and storage for stakeholder teams</li>
</ul>
<ul>
<li>Ensure compliance with standards for data protection regulation</li>
</ul>
<ul>
<li>Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and be yourself</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in a software or infrastructure engineering industry</li>
</ul>
<ul>
<li>Experience operating services in production and at scale</li>
</ul>
<ul>
<li>Familiarity with one of the distributed NewSQL datastores such as CockroachDB, TiDB, YDB, Yugabyte and/or stream processing tools such as NATS or Kafka</li>
</ul>
<ul>
<li>Experience with designing and operating these systems at scale</li>
</ul>
<ul>
<li>Familiarity with Kubernetes and have interest or comfortable with using it for event-driven and/or stateful orchestration</li>
</ul>
<ul>
<li>Proficiency in Go/Python/Java and interested in contributing to open source</li>
</ul>
<p>ExperienceLevel: senior</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>database and stream processing, API design and implementation, operational and scaling of relational data platforms, stream processing architecture, performance, security, reliability, and scalability of data platforms, data access and storage guidelines, data protection regulation compliance, Kubernetes, Go/Python/Java, open source contribution</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671479006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb421081-0b2</externalid>
      <Title>Senior Software Engineer - Lifecycle Management</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Staff Software Engineer to work on our Onboarding and Lifecycle Management (LCM) Platform team with focus on enhancing and managing services for importing, syncing and provisioning identities and access policies i.e., users, groups, roles, entitlements, etc. These features allow customers the flexibility to link and enhance their business processes with Okta’s identity management product.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Work with senior engineering team in major development projects, design and implementation</li>
<li>Be a key contributor in the implementation of the LCM infrastructure</li>
<li>Troubleshooting customer issues and debugging from logs (Splunk, Syslogs, etc.)</li>
<li>Design &amp; Implement features with functional and unit tests along with monitoring and alerts</li>
<li>Conduct design &amp; code reviews, analysis and performance tuning</li>
<li>Quick prototyping to validate scale and performance</li>
<li>Provide technical leadership and mentorship to more junior engineers</li>
<li>Interface with Architects, QA, Product Owners, Engineering Services, Tech Ops</li>
<li>Partner with our Product Development, QA, and Site Reliability Engineering teams for scoping the development and deployment work</li>
</ul>
<p>Required knowledge, skills, and abilities:</p>
<ul>
<li>The ideal candidate is someone who is experienced building software systems to manage and deploy reliable and performant infrastructure and product code at scale on a cloud infrastructure</li>
<li>4+ years of Software Development in Java, preferably significant experiences with Hibernate and Spring Boot</li>
<li>2+ years of development experience building services, internal tools and frameworks</li>
<li>2+ years experience automating and deploying large scale production services in AWS, GCP or similar</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures</li>
<li>Experience working with relational databases, ideally MySQL, PostgreSQL or GraphDB</li>
<li>Ability to work effectively with distributed teams and people of various backgrounds</li>
<li>Lead and mentor junior engineers</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with server-side technologies including caching, asynchronous processing, and multi-threading.</li>
<li>Experience in TDD.</li>
<li>Experience with UI development or javascript frameworks</li>
<li>Knowledge of Identity and Access Management protocols and technologies: OAuth, OpenID Connect, SAML, SCIM</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Hibernate, Spring Boot, AWS, GCP, Caching, Stream Processing, Resilient Architectures, Relational Databases, MySQL, PostgreSQL, GraphDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is the leading independent provider of enterprise identity, enabling organisations to securely connect the right people to the right technologies at the right time.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6879868</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4f71a295-9c1</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced Staff Software Engineer to work on our next-generation Imports Platform team. The Imports Platform team is leading a strategic initiative to modernize Okta&#39;s identity lifecycle management capabilities by architecting and migrating from a legacy monolithic system to a highly scalable, distributed microservices platform.</p>
<p>As a Staff Software Engineer on the Imports Platform team, you will be a technical leader who independently owns projects end-to-end, from ideation and architectural design through implementation, deployment, and operational excellence. You will drive technical strategy, make critical architectural decisions, and influence both your immediate team and cross-team initiatives.</p>
<p>You will work on complex distributed systems challenges including massive-scale batch processing, real-time synchronization, and user matching algorithms that serve thousands of enterprise customers. This role requires strong technical leadership, strategic thinking, and the ability to balance short-term delivery with long-term platform health.</p>
<p>You will mentor senior and junior engineers, partner with Product Management on feature strategy, and help shape the future of Okta&#39;s Imports platform. This is a hybrid position requiring a blend of remote and in-office collaboration.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide technical leadership on major development projects, including architectural design and implementation strategy</li>
<li>Independently own and deliver projects end-to-end within the team, including technical prioritization and tradeoffs</li>
<li>Generate design ideas and solutions for ambiguous problems, taking complete ownership from conception through production impact</li>
<li>Design and architect core, high-performance, scalable software components with full ownership of all production aspects (scalability, reliability, monitoring, alerting, resource efficiency, testing, documentation)</li>
<li>Lead technical design discussions and guide the team in making architectural decisions</li>
<li>Drive the migration strategy from monolithic to microservices architecture, including planning, scoping, and execution</li>
<li>Interface extensively with cross-functional teams (Architects, QA, Product, Technical Support, Documentation, UX, and SRE) to deliver comprehensive import and sync solutions</li>
<li>Analyze and refine requirements with Product Management, partnering on product features and helping define the &#39;how&#39;</li>
<li>Conduct code reviews with focus on systems design, reliability, performance, scalability, security, and maintainability</li>
<li>Share knowledge widely, coordinate across teams, and manage risk and dependencies for projects</li>
<li>Work with QA and SRE teams to define comprehensive testing strategies and operational excellence practices</li>
<li>Independently troubleshoot complex production incidents spanning the home team, perform root cause analysis, and drive operational improvement projects</li>
<li>Use data and metrics to drive technical decisions and validate the impact of architectural changes</li>
<li>Mentor and provide technical guidance to senior and junior engineers on the team</li>
<li>Help resolve difficult customer issues and work closely with Field teams and CSMs to identify patterns and drive product improvements</li>
<li>Participate in group strategy discussions and help break down strategic initiatives into actionable technical milestones</li>
<li>Proactively identify and advocate for improvements in team velocity, engineering practices, and operational processes</li>
<li>Drive improvements in observability, monitoring, and production support capabilities</li>
</ul>
<p>Required Knowledge, Skills, and Abilities:</p>
<ul>
<li>7+ years of software development experience building highly-reliable, mission-critical software at scale</li>
<li>Deep expertise with object-oriented languages, particularly Java, with proven ability to architect large-scale systems</li>
<li>Expert-level knowledge of Spring Boot framework, Maven, and modern Java development practices</li>
<li>Deep understanding of infrastructure-level technologies: distributed systems, caching strategies, stream processing, resilient architectures</li>
<li>Solid experience with data stores including relational databases (MySQL), caching layers (Redis), and cloud storage (S3)</li>
<li>Experience with one or more Directory services: Active Directory, LDAP, Office 365, Azure AD</li>
<li>Strong experience with RESTful APIs, gRPC, and microservices architecture patterns</li>
<li>Proven track record of working with systems at massive scale, including batch processing and real-time sync capabilities</li>
<li>Experience with cloud platforms (AWS, GCP) including services like SQS, S3, and multi-region architectures</li>
<li>Strong understanding of distributed job processing, message queues, and event-driven architectures</li>
<li>Demonstrated ability to lead technical projects independently and influence cross-team initiatives</li>
<li>Excellent communication skills with ability to share information widely and coordinate across teams</li>
<li>Strong mentorship capabilities with experience guiding senior and junior engineers</li>
<li>Customer-focused mindset with experience working with Field teams to resolve complex issues</li>
<li>Strategic thinking ability to participate in and contribute to platform strategy</li>
<li>Experience with operational excellence including incident management, root cause analysis, and driving systemic improvements</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Experience with Protocol Buffers (Protos) and building event-driven systems</li>
<li>Experience with server-side technologies including advanced caching, asynchronous processing, multi-threading, and concurrency patterns</li>
<li>Experience in Test-Driven Development (TDD) and automated testing strategies</li>
<li>Deep knowledge of Identity and Access Management protocols and technologies: OAuth, OpenID Connect, SAML, SCIM, LDAP</li>
<li>Experience with Microsoft Azure management APIs, Microsoft Graph API, Office 365, or ADFS</li>
<li>Experience automating and deploying large-scale production services in AWS, GCP, or similar cloud platforms</li>
<li>Experience with feature flag frameworks and gradual rollout strategies for large-scale migrations</li>
<li>Understanding of user matching, correlation algorithms, and identity resolution at scale</li>
<li>Experience with observability platforms, creating comprehensive monitoring and alerting strategies</li>
<li>Experience migrating monolithic applications to microservices architecture</li>
<li>Knowledge of data modeling for graph databases and relationship management</li>
<li>Experience with incremental sync, delta detection, and change data capture patterns</li>
<li>Background in building resilient systems with retry logic, circuit breakers, and failure handling</li>
<li>Experience with performance optimization and capacity planning for high-throughput systems</li>
</ul>
<p>Education and Training:</p>
<p>B.S. Computer Science or related field</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, Maven, distributed systems, caching strategies, stream processing, resilient architectures, relational databases, caching layers, cloud storage, Directory services, RESTful APIs, gRPC, microservices architecture patterns, batch processing, real-time sync capabilities, cloud platforms, distributed job processing, message queues, event-driven architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7725948</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>549fc0bc-10b</externalid>
      <Title>Software Architect, Lifecycle Management</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Okta is an enterprise-grade identity management platform, built from the ground up in the cloud and delivered with an unwavering focus on customer success. With Okta, organisations can manage access across any application, person or device. Whether the people are employees, partners or customers or the applications are in the cloud, on premises or on a mobile device, Okta helps organisations become more secure, make people more productive, and maintain compliance.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Principal Software Engineer to work on our Onboarding and Lifecycle Management (LCM) Platform team with focus on enhancing and managing services for importing, syncing and provisioning identities and access policies i.e., users, groups, roles, entitlements, etc. These features allow customers the flexibility to link and enhance their business processes with Okta’s identity management product.</p>
<p>Ideal candidate should be Hands-on expert developer in Java who is deeply technical with a passion for building high-quality, secure, and performant applications and frameworks. Demonstrable experience leading technical projects involving more than 20 engineers across multiple workstreams Excited by the opportunity to work on cutting-edge security and identity management challenges and are a thought leader who can drive technical strategy and mentor other engineers.</p>
<p>A collaborative individual with excellent communication skills, capable of working with cross-functional teams to deliver on a shared vision. Not just be a builder; but a force multiplier who can create frameworks and solutions that enable other teams to be more productive.</p>
<p>This role is to build, design solutions, and maintain our platform for scale. The ideal candidate is someone who has experience building software systems to manage and deploy reliable and performant infrastructure and product code at scale on a cloud infrastructure.</p>
<p>Job Duties And Responsibilities</p>
<ul>
<li>Work with senior engineering team in major development projects, design and implementation</li>
<li>Lead the architectural design and implementation of new features and services, with a focus on scalability, performance, and security.</li>
<li>Collaborate with product managers, architects, and other engineering teams to define the technical strategy and lead the prototyping of software components.</li>
<li>Directly oversee and coordinate complex technical initiatives involving 20+ engineers, ensuring alignment across disparate sub-teams</li>
<li>Drive a culture of engineering excellence and continuous improvement, with a focus on robust testing, monitoring, and operational excellence.</li>
<li>Stay up-to-date with the latest industry trends and technologies in identity, security, and distributed systems.</li>
<li>Partner with our Product Development, QA, and Site Reliability Engineering teams for scoping the development and deployment work.</li>
</ul>
<p>Required Knowledge, Skills, And Abilities</p>
<ul>
<li>The ideal candidate is someone who is experienced building software systems to manage and deploy reliable and performant infrastructure and product code at scale on a cloud infrastructure</li>
<li>15+ years of Software Development in Java, preferably significant experience with Hibernate and Spring Boot</li>
<li>A deep understanding of design patterns, scalability patterns, security engineering, and object-oriented principles.</li>
<li>4+ years experience automating and deploying large scale production services in AWS, GCP or similar</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures.</li>
<li>Experience working with relational databases, ideally MySQL, PostgreSQL or GraphDB</li>
<li>Strong communication skills and the ability to work across functions, distributed teams.</li>
<li>Lead and mentor junior engineers</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with server-side technologies including caching, asynchronous processing, and multi-threading.</li>
<li>Experience with security best practices and threat modeling</li>
<li>Knowledge of Identity and Access Management protocols and technologies: OAuth, OpenID Connect, SAML, SCIM</li>
</ul>
<p>Education</p>
<ul>
<li>B.E. Computer Science or equivalent</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Hibernate, Spring Boot, design patterns, scalability patterns, security engineering, object-oriented principles, AWS, GCP, caching, stream processing, resilient architectures, relational databases, MySQL, PostgreSQL, GraphDB, communication skills, leadership skills, mentoring skills, server-side technologies, asynchronous processing, multi-threading, security best practices, threat modeling, Identity and Access Management protocols, OAuth, OpenID Connect, SAML, SCIM</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is an enterprise-grade identity management platform that provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7771673</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b657c4e-8a1</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute. You will take ownership of infrastructure components that process trillions of events daily, driving the scalability, performance, and reliability of the systems that power product and ML workloads across the company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimize multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimize distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimization skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, stream processing, large-scale data platforms, Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, debugging, profiling, performance optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3452e2c3-f0f</externalid>
      <Title>FBS Senior System Architect - Mainframe</Title>
      <Description><![CDATA[<p><strong>FBS Senior System Architect - Mainframe</strong></p>
<p>Capgemini is looking for a Senior System Architect specializing in Mainframe technologies to join our dynamic team.</p>
<p>As a Senior System Architect, you will play a pivotal role in designing and optimizing mainframe systems architectures that meet complex business requirements. You will leverage your extensive experience to create innovative solutions that enhance system performance and scalability.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design, develop, and implement mainframe architecture solutions that align with organizational strategic goals.</li>
<li>Collaborate with cross-functional teams to gather and analyze business requirements, and translate them into efficient technical architectures.</li>
<li>Evaluate existing systems and recommend improvements to ensure high levels of performance, availability, and security.</li>
<li>Lead technical discussions, and provide mentorship to junior architects and developers.</li>
<li>Conduct proof of concepts (POCs) to demonstrate the viability of proposed designs and solutions.</li>
<li>Stay abreast of emerging trends and technologies in mainframe systems architecture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Work Experience in This Field</li>
</ul>
<ul>
<li>Minimum Required: 9 years of experience in system architecture with a focus on mainframe technologies.</li>
</ul>
<ul>
<li>Education</li>
</ul>
<ul>
<li>Minimum Required: Bachelor&#39;s degree in Computer Science, Information Technology, or a related field.</li>
</ul>
<ul>
<li>Other Critical Skills</li>
</ul>
<ul>
<li>Strong knowledge of mainframe technologies such as COBOL, JCL, CICS, and DB2 - Advanced.</li>
<li>Specialty Insurance - P&amp;C - Advanced</li>
<li>Policy Processing - Advanced</li>
<li>Downstream Processing - Advance</li>
<li>Policy Billing - Advanced</li>
<li>Excellent problem-solving and analytical skills - Advanced.</li>
<li>Strong communication and interpersonal skills - Advanced.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>mainframe technologies, COBOL, JCL, CICS, DB2, specialty insurance, policy processing, downstream processing, policy billing, problem-solving, analytical skills, communication skills, interpersonal skills</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is one of the United States&apos; largest insurers, providing a wide range of insurance and financial services products with gross written premium well over US$25 Billion (P&amp;C).</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nYPMdE4VFQc4A9UynLqT8x/hybrid-fbs-senior-system-architect---mainframe-in-pune-at-capgemini</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b55675c9-0db</externalid>
      <Title>Head of Engineering (Platform)</Title>
      <Description><![CDATA[<p><strong>Head of Engineering (Platform)</strong></p>
<p>Fuse Energy is seeking a Head of Engineering (Platform) to lead the development of our core backend systems and platform infrastructure. As a key member of our team, you will own the architecture and scalability of the platform, ensuring we build robust, high-performance systems that enable rapid product iteration and exceptional customer experiences.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the backend platform architecture, infrastructure, and foundational services</li>
<li>Drive the evolution of our platform to support scale, performance, and reliability</li>
<li>Build a real-time digital twin of renewable generation and customer demand</li>
<li>Design and manage high-volume data pipelines for energy consumption and system telemetry</li>
<li>Lead the development of integration layers and messaging interfaces with third-party services</li>
<li>Establish engineering best practices for observability, CI/CD, testing, and scalability</li>
<li>Partner closely with product and backend teams to support rapid development cycles</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Proven track record as a senior software engineer or tech lead, ideally with platform/backend focus</li>
<li>5+ years experience in software engineering, with 2+ years in a leadership role</li>
<li>Experience building and operating production-grade systems at scale</li>
<li>Strong understanding of system design, distributed computing, and cloud infrastructure</li>
<li>Clear and proactive communication, with the ability to align cross-functional teams</li>
<li>Hands-on approach to solving problems and making strategic decisions</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with Infrastructure as Code (e.g., AWS CDK, Terraform)</li>
<li>Experience with event-driven architecture, messaging queues, or stream processing</li>
<li>Familiarity with building internal platforms or developer tooling</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office based employees</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend platform architecture, infrastructure as code, event-driven architecture, messaging queues, stream processing, system design, distributed computing, cloud infrastructure, AWS CDK, Terraform, CI/CD, testing, scalability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup that aims to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dSZh2emP6XmnvYfQnTTL5q/hybrid-head-of-engineering-(platform)-in-london-at-fuse-energy</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>1b059610-8db</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Software Engineer II at their Redmond office. This role sits at the heart of software development, turning code into maintainable, extensible software that is resilient to change. You&#39;ll work directly with leadership to shape the company&#39;s direction in the software development space.</p>
<p><strong>About the Role</strong></p>
<p>The Software Engineer II will contribute to the design and architecture of software solutions, create design documents, and ensure alignment with security, privacy, and compliance requirements. They will implement maintainable, extensible code and participate in reviews that uphold Microsoft engineering standards. The role will also involve developing and refining test plans, integrating automation, and ensuring robust test coverage for backend services.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Understand User Requirements – Collaborate with product managers and technical leads to clarify requirements and incorporate continuous feedback loops.</li>
<li>Design and Architecture – Contribute to solution architecture, create design documents, and ensure alignment with security, privacy, and compliance requirements.</li>
<li>Coding and Code Quality – Implement maintainable, extensible code and participate in reviews that uphold Microsoft engineering standards.</li>
<li>Testing and Automation – Develop and refine test plans, integrate automation, and ensure robust test coverage for backend services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience in AI/ML frameworks such as PyTorch or TensorFlow and practical experience applying Data Science techniques.</li>
<li>Experience in big data systems such as Spark/PySpark or Stream Processing Systems.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Model a growth mindset by learning from others and sharing your learnings with others.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, C#, Java, Python, AI/ML, Data Science, Big Data, PyTorch, TensorFlow, Spark/PySpark, Stream Processing Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that empowers every person and every organization on the planet to achieve more. They come together with a growth mindset, innovate to empower others, and collaborate to realize their shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/software-engineer-ii/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>