<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>61234903-9fa</externalid>
      <Title>Engineering Manager (Java or TypeScript) - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>
<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>
<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>
<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>
<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. Together, you&#39;ll ensure a seamless and cohesive user experience.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Frontend: TypeScript and Node.js processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>
<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>
<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>
<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>
<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>
<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>
<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>
<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>
<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKRs) and roadmap.</li>
<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>
<li>Lead strategic team-driven projects, identify opportunities, and define and uphold quality standards.</li>
<li>Foster a great team culture aligned with the company values of ownership, autonomy, and inclusivity within your team and the entire department.</li>
<li>Take full responsibility for delivering impactful features to millions of users annually.</li>
</ul>
<p>The role also includes spending approximately 40-50% of your time as an individual contributor focused on feature implementation.</p>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, a related technical field or equivalent practical experience.</li>
<li>Experience building and implementing backend services and/or frontend applications.</li>
<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>
<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>
<li>Love for building world-class products with a great user experience.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts, people we can all relate to, making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Competitive Package: 95.000-125.000€ + VSOPs based on relevant experience and seniority; learn more about our approach to compensation here.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>TypeScript, Node.js, ReactJS, Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search and booking services for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>95000</Compensationmin>
      <Compensationmax>125000</Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/1558189</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Kafka, Jenkins, TeamCity, AWS CodePipeline, containerization, orchestration, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07c95966-8e7</externalid>
      <Title>Backend Developer - Host Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Backend Developer and become part of the team that brings new vacation rental properties to life on Holidu.</p>
<p>You&#39;ll be working at the heart of our property acquisition engine, where we take hosts from their very first sign-up all the way to their first booking, making that journey as fast and seamless as possible.</p>
<p>This team sits at a uniquely strategic intersection of product and growth. You will build and optimize the systems that every new host flows through: from onboarding and listing creation, to property configuration, content quality, and referral programs.</p>
<p>The work demands reliability and attention to detail, because the time between a host signing up and welcoming their first guest, and how well their property performs from day one, is directly shaped by the quality of what you build.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and contribute to shaping the team&#39;s direction as you grow.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Early experience delivering product impact through engineering: you&#39;ve shipped things that real users depend on.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, connecting hosts with millions of guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2589679</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3b63dd5-0f6</externalid>
      <Title>Backend Developer</Title>
      <Description><![CDATA[<p>We are seeking an experienced backend developer to join our tech team. As a backend developer, you will be responsible for designing, developing, and maintaining the server-side of our applications and systems. You will work closely with our frontend developers, designers, and product owners to ensure a seamless integration between frontend and backend.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop scalable and efficient backend solutions for our digital platforms.</li>
<li>Write clean, readable, and reusable code.</li>
<li>Perform unit testing and debugging to ensure high quality and reliability.</li>
<li>Participate in technical discussions and contribute ideas to improve the product&#39;s performance and functionality.</li>
<li>Collaborate with frontend developers and other team members to ensure a smooth user experience.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience in backend development with a focus on web applications.</li>
<li>Good knowledge of programming languages such as Python, Java, or similar.</li>
<li>Experience working with frameworks such as Django, Flask, Spring, or similar.</li>
<li>Familiarity with database management systems such as MySQL, PostgreSQL, or similar.</li>
<li>Knowledge of API design and implementation.</li>
<li>Strong problem-solving skills and ability to work independently as well as in a team.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Attractive salary based on experience and competence.</li>
<li>Opportunity to work with exciting projects and the latest technology.</li>
<li>Flexible working hours and possibility of remote work.</li>
<li>Continuous professional development and opportunities for career growth.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend development, web applications, Python, Java, Django, Flask, Spring, MySQL, PostgreSQL, API design, problem-solving, cloud services, AWS, Google Cloud, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Transportation</Industry>
      <Employername>Scandinavian Airlines</Employername>
      <Employerlogo>https://logos.yubhub.co/scandinavianairlines.teamtailor.com.png</Employerlogo>
      <Employerdescription>Scandinavian Airlines is an airline company that operates flights across the world.</Employerdescription>
      <Employerwebsite>https://scandinavianairlines.teamtailor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://scandinavianairlines.teamtailor.com/jobs/4882026-backend-utvecklare</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:
<ul>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
</ul>
</li>
<li>Streamlining development and deployment processes.</li>
<li>Implementation of MCP servers to assist the rest of the Support Engineering team and to proactively monitor the production environment.</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache/Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a leading investment manager focused on delivering the highest quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>100000</Compensationmin>
      <Compensationmax>175000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f6deb282-e3c</externalid>
      <Title>Senior Backend Developer (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Senior Backend Developer and become part of the team that powers how our hosts&#39; vacation rentals reach the world.</p>
<p>You&#39;ll be working at the core of our distribution engine - where we take tens of thousands of homes and make them bookable on major travel platforms such as Holidu, Booking.com, Airbnb, VRBO, HomeToGo, and Check24.</p>
<p>This team operates in one of the most technically dynamic areas of our product. You will work with systems that synchronize large volumes of updates at high speed and maintain high availability, while integrating with a wide variety of partner APIs - each with its own structure and complexity.</p>
<p>It&#39;s work that demands precision, scalability, and smart engineering decisions, and it plays a crucial role in helping our hosts reach millions of guests worldwide.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and actively shape the team&#39;s direction, not just execute on it.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>
<li>Ensure our applications are highly scalable, capable of handling tens of thousands of properties and millions of bookings.</li>
<li>Work with data persistence - whether in PostgreSQL, Redis, S3, or new state-of-the-art technologies you help us evaluate.</li>
<li>Ship to production daily: deploying to our AWS Kubernetes cluster is part of the routine, not a special occasion.</li>
<li>Own the reliability of your services: set up monitoring, define SLOs, and drive incident resolution so your team can move fast with confidence.</li>
<li>Collaborate in a supportive, cross-functional team that values knowledge sharing and improving together.</li>
<li>Apply engineering best practices, and stay curious by experimenting with new technologies.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Proven track record of delivering product impact through engineering: not just building services, but solving real problems for users.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a company that powers how vacation rentals reach the world, with tens of thousands of homes bookable on major travel platforms.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2573674</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bcb4d82-b90</externalid>
      <Title>Working Student Backend Engineering (all genders)</Title>
      <Description><![CDATA[<p>You will be working as a Working Student in the Account Compliance &amp; Experience (ACE) team, which is responsible for delivering secure and seamless flows for account lifecycle, relationship, and compliance to customers.</p>
<p>As a Working Student, you will contribute to the development of new backend features across the ACE domain, assist with operational tasks, get hands-on with modern AI-assisted development, and support ongoing tech refactoring efforts.</p>
<p>You will work directly alongside senior engineers, take part in real product development, and gradually build ownership over meaningful parts of our codebase.</p>
<p>The ACE team works within Holidu&#39;s broader backend ecosystem, using Java/Kotlin with Spring Boot, PostgreSQL, Redis, and other data stores, as well as AWS services and Jenkins for CI/CD.</p>
<p>You will have the opportunity to attend team planning sessions, architecture discussions, and retrospectives, giving you a real window into how a senior engineering team operates in a high-growth company.</p>
<p>We offer a fair salary, impact, growth, community, flexibility, and fitness opportunities.</p>
<p>You will be required to work ~20 hours per week, with 1-2 days per week in the office in Munich.</p>
<p>You should be currently enrolled in a degree in Computer Science, Software Engineering, or a related field, have a solid understanding of object-oriented programming and basic software design principles, and some hands-on experience with Java or Kotlin.</p>
<p>You should also have familiarity with RESTful APIs and relational databases (SQL), a genuine curiosity for backend systems, and a product-minded attitude.</p>
<p>Excellent communication skills in English are required, and German is a plus but not required.</p>
<p>Bonus points if you have exposure to Spring Boot, cloud platforms (AWS), or any experience with identity/access management concepts.</p>
]]></Description>
      <Jobtype>working_student</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, PostgreSQL, Redis, AWS services, Jenkins, CI/CD, RESTful APIs, relational databases (SQL), cloud platforms (AWS), identity/access management concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a host platform for property owners and managers.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2605407</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation full-stack engineering experience, with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Experience driving async communication practices to reduce communication friction</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Past experience working in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd7327f8-fcf</externalid>
      <Title>Staff Software Engineer, Full-Stack - Enterprise Gen AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a frontend-focused full-stack engineer to help build AI-powered applications that redefine enterprise workflows and push the boundaries of interactive AI. As a staff software engineer, you&#39;ll work on a mix of cutting-edge customer-facing AI applications and internal SaaS products. Our engineering team powers projects like TIME&#39;s Person of the Year AI experience, where our AI technology helped shape one of the most iconic features in media. You&#39;ll also contribute to Scale&#39;s GenAI Platform (SGP), a powerful system that enables businesses to build and deploy AI agents at scale.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and enhancing user-facing AI applications for major enterprise customers, including high-profile media and Fortune 500 companies</li>
<li>Developing and refining features for Scale&#39;s GenAI Platform, empowering businesses to build, deploy, and manage AI-driven agents</li>
<li>Designing, building, and optimizing polished, high-performance UIs using Next.js, React, TypeScript, and Tailwind</li>
<li>Working closely with product managers, designers, and AI/ML teams to create seamless, intuitive, and impactful user experiences</li>
<li>Integrating frontend applications with backend services, working with APIs, authentication systems, and cloud-based infrastructure</li>
</ul>
<p>In this role, you&#39;ll have the opportunity to shape the future of AI-powered user experiences, working on projects that impact millions of users while developing tools that empower businesses to deploy AI at scale.</p>
<p>The base salary range for this full-time position in our hub locations of San Francisco, New York, or Seattle is $248,400 to $310,500 USD. Compensation packages at Scale include base salary, equity, and benefits. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,400—$310,500 USD</Salaryrange>
      <Skills>Next.js, React, TypeScript, Tailwind, AI/ML, APIs, Authentication systems, Cloud-based infrastructure, FastAPI, PostgreSQL, GraphQL, AWS, Azure, GCP, Data-rich web platforms, Interactive AI applications, Agent-based systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4529529005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack, AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starter with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also drive async communication practices to reduce communication friction.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starting individual with technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficient in Arabic, past experience working in a forward-deployed engineer/dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14499a71-fa9</externalid>
      <Title>Software Engineer, Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises: designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements; we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
</ul>
<p>A collaborative mindset is key: you&#39;ll work closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model Integration, AI Agent Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4536653005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
<Description><![CDATA[<p><strong>Secure Every Identity, from AI to Human</strong></p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
</ul>
<ul>
<li>2+ years of experience in a software or machine learning engineering role.</li>
</ul>
<ul>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
</ul>
<ul>
<li>Experience with frontend frameworks such as React/Redux and backend technologies such as Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
</ul>
<ul>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
</ul>
<ul>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills.</li>
</ul>
<ul>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
</ul>
<ul>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
</ul>
<ul>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
</ul>
<ul>
<li>Previous experience with search engines (e.g., Elasticsearch).</li>
</ul>
<ul>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p><strong>Engineering at Labelbox</strong></p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p><strong>Our Technology Stack</strong></p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
</ul>
<ul>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
</ul>
<ul>
<li>APIs: GraphQL</li>
</ul>
<ul>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
</ul>
<ul>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
</ul>
<ul>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range: $130,000-$200,000 USD</p>
<p><strong>Life at Labelbox</strong></p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
</ul>
<ul>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
</ul>
<ul>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e355a4a3-c92</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modelling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p><strong>Automation &amp; Tooling</strong></p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p><strong>Operations &amp; Incident Response</strong></p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, and more.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p><strong>Preferred/Bonus Qualifications</strong></p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid-senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437947</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>230b25df-0f4</externalid>
      <Title>Senior Software Engineer - Database Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our Database Infrastructure team. As a member of this team, you will build and operate large-scale, reliable, and performant data systems using ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</p>
<p>You will collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord. You will exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</p>
<p>You will work with a talented team of engineers who have built one of the largest communication platforms in the world.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and operate large-scale, reliable, and performant data systems with ScyllaDB, PostgreSQL, ElasticSearch, Linux, and Rust.</li>
<li>Collaborate with product and infrastructure teams to develop storage primitives enabling all of Discord.</li>
<li>Exercise &#39;First Principles Thinking&#39; to always deliver what matters most to our users.</li>
<li>Work with a talented team of engineers who have built one of the largest communication platforms in the world.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience with building distributed systems and datastore infrastructure.</li>
<li>Experience with highly-available and distributed databases: e.g. ScyllaDB, Cassandra, BigTable, DynamoDB, CockroachDB, Postgres w/HA, etc.</li>
<li>Proficiency with at least one statically-typed programming language: e.g. Rust, Go, Java, C, C++</li>
<li>Strong operating systems, distributed systems, and concurrency control fundamentals.</li>
<li>Familiarity with Linux internals.</li>
<li>Comfortable working in fast-paced environments.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience with Cassandra or Scylla.</li>
<li>Experience with Rust.</li>
<li>Knowledge of DevOps tools like Salt, Terraform, or Kubernetes.</li>
</ul>
<p>Why Discord?</p>
<p>Discord plays a uniquely important role in the future of gaming. We&#39;re a multi-platform, multi-generational, and multiplayer platform that helps people deepen their friendships around games and shared interests.</p>
<p>We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank.</p>
<p>Join us in our mission!</p>
<p>Your future is just a click away!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>ScyllaDB, PostgreSQL, ElasticSearch, Linux, Rust, Distributed systems, Datastore infrastructure, Highly-available and distributed databases, Operating systems, Concurrency control fundamentals, Linux internals, Cassandra, Go, Java, C, C++, DevOps tools, Salt, Terraform, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including playing video games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8200328002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8482d0fc-285</externalid>
      <Title>Senior Backend Engineer, GitLab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab reliably by building and maintaining the infrastructure, tooling, and automation behind our deployment options.</p>
<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to make GitLab easier to deploy, more secure by default, and scalable across major cloud providers and a wide range of customer environments.</p>
<p>In this role, you&#39;ll partner closely with engineering teams and act as a bridge to customer needs, improving installation, upgrade, and day-to-day operations for production-grade GitLab deployments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolving Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support validated reference architectures for enterprise-scale deployments</li>
</ul>
<ul>
<li>Building automation pipelines and observability into deployment tooling to validate, test, and operate GitLab across Kubernetes and other self-managed environments</li>
</ul>
<p>You&#39;ll maintain and evolve the Omnibus GitLab package to support reliable, production-ready self-managed deployments, improving deployment stability, increasing upgrade success rates, and reducing escalation rates.</p>
<p>You&#39;ll develop and improve GitLab Helm Charts so core components integrate cleanly and scale across supported environments, reducing deployment friction, shortening time to deploy, and improving operational consistency at scale.</p>
<p>You&#39;ll enhance the GitLab Environment Toolkit (GET), validated reference architectures, and the GitLab Operator for secure, Kubernetes-native lifecycle management, improving reliability, strengthening security baselines, and accelerating adoption in customer environments.</p>
<p>You&#39;ll improve installation, upgrade, and operational workflows across deployment methods to create a consistent experience for self-managed customers, reducing operational overhead and lowering failure rates.</p>
<p>You&#39;ll partner with Security to address vulnerabilities and deliver secure defaults and configurations in the deployment stack, reducing exposure to vulnerabilities and improving baseline security across self-managed deployments.</p>
<p>You&#39;ll build and maintain automation and continuous integration/continuous delivery (CI/CD) pipelines that validate and test Omnibus, Charts, GET, and the Operator, increasing release confidence, improving test coverage, and reducing regressions across deployment tooling.</p>
<p>You&#39;ll work closely with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features into deployment methods and keep them reliable, scalable, and aligned with customer needs, improving delivery readiness and reducing operational issues after release.</p>
<p>You&#39;ll guide architectural direction, mentor backend engineers, and contribute to the roadmap for self-managed delivery, improving technical quality, accelerating delivery effectiveness, and strengthening team execution.</p>
<p>You&#39;ll have experience operating backend services in production, including deployment, monitoring, and maintenance in Kubernetes- and Helm-based environments.</p>
<p>You&#39;ll have proficiency in Go for building observable and resilient services, with working knowledge of Ruby as a useful addition.</p>
<p>You&#39;ll have hands-on practice with infrastructure as code, including tools such as Terraform, and with managing infrastructure across cloud providers such as Google Cloud Platform, Amazon Web Services, or Microsoft Azure.</p>
<p>You&#39;ll have knowledge of database design, operations, and troubleshooting, especially for PostgreSQL in secure and scalable setups.</p>
<p>You&#39;ll have knowledge of secure, scalable, and reliable deployment practices, including service scaling and rollout strategies.</p>
<p>You&#39;ll have familiarity with observability tools and patterns such as Prometheus and Grafana to monitor system health and performance.</p>
<p>You&#39;ll have the ability to work effectively in large codebases and coordinate across distributed, cross-functional teams using clear written communication.</p>
<p>You&#39;ll have openness to transferable experience from related backend or infrastructure roles, along with the ability to write user-focused documentation and implementation guides.</p>
<p>The Upgrades team is part of GitLab Delivery and focuses on helping self-managed customers run GitLab successfully in their own environments, from smaller deployments to large enterprise footprints.</p>
<p>We own deployment and operational tooling across Omnibus GitLab, Helm Charts, GET, and the GitLab Operator, and we are a globally distributed, all-remote group that works asynchronously with Site Reliability Engineering, Release, Security, and Development teams across regions.</p>
<p>We are focused on making self-managed GitLab easier to deploy, upgrade, secure, and operate at scale.</p>
<p>For more on how we work, see Team Handbook Page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Ruby, Terraform, Google Cloud Platform, Amazon Web Services, Microsoft Azure, PostgreSQL, Prometheus, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463933002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0540dd96-198</externalid>
      <Title>Senior Software Engineer - Query Engine, Database Internals - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join the Elasticsearch - Analytical Engine team. This globally-distributed, completely remote team of senior engineers is responsible for building new analytics capabilities in Elasticsearch&#39;s latest aggregation framework based on a completely new compute engine, and accessed via our new piped query language called ES|QL.</p>
<p>This is a senior software engineering role that covers the design and implementation of new features, enhancements to existing features, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable, and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>You&#39;ll be a full-time Elasticsearch contributor, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. You are able to research what available data structures and algorithms work best to implement a new functionality or enhancement. Sometimes you&#39;ll need to implement a data structure or algorithm in the code base. And there will be times when you&#39;ll need to get close to the operating system and hardware.</li>
<li>You&#39;ll work with a globally distributed team of experienced engineers focused on the search and query (ES|QL) analytics capabilities of Elasticsearch. You&#39;ll work with the teams that build the UI to ensure a good user experience, and with the teams building solutions on top of these APIs.</li>
<li>You&#39;ll be an expert in several areas of Elasticsearch, and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</li>
<li>You&#39;ll work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts, and sometimes handling them yourself.</li>
<li>You&#39;ll write idiomatic modern Java -- Elasticsearch is 99.8% Java!</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as newer features like lambdas.</li>
<li>You have experience with software systems engineering.</li>
<li>You have a strong desire to optimize and make use of the most efficient data structures and algorithms.</li>
<li>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</li>
<li>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code, approaches, and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</li>
<li>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</li>
<li>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve built things with Elasticsearch before.</li>
<li>You’ve worked in the search and information retrieval space. You’re familiar with the data structures and algorithms associated with information retrieval.</li>
<li>You’ve worked on data storage technology or have experience building data analytics capabilities.</li>
<li>You have experience designing, leading and owning cross-functional initiatives.</li>
<li>You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component.</p>
<p>The typical starting salary range for new hires in this role is listed below. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>
<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>
<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being. The typical starting salary range for this role is $133,100-$210,600 USD. The typical starting salary range for this role in the select locations listed above is $159,900-$252,900 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$133,100-$210,600 USD</Salaryrange>
      <Skills>Java, data structures and concurrency constructs, lambdas, software systems engineering, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7723819</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07626e74-020</externalid>
      <Title>Engineering Architect, Identity (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Auth0 secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Software Architect, Identity</strong></p>
<p><strong>The Engineering Architect Team</strong></p>
<p>The Architecture team is a small group of very senior engineers reporting to our VP of Engineering Excellence, working broadly across the organisation in collaboration with Engineering, Product, and Security. We partner deeply with other Engineering teams for large projects, and provide direction and architectural guidance for smaller initiatives. We have a dual-pronged charter to “level up the tech stack and level up the people stack” via both technical contributions and partnerships/mentoring.</p>
<p>In this role, you will have the opportunity to significantly contribute to Auth0’s future technology direction. Through your experience, knowledge of industry trends, and technical abilities you will provide guidance, build proof of concepts, and deliver production software implementations that help Auth0 Engineering teams move faster by using and developing standard patterns and technologies. You will also help advance the engineering culture and help uplevel other engineers. Note that while this role involves a lot of guidance, documentation, and leadership, it also requires substantial hands-on coding and development of both applications and systems.</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Collaborate with Product, Security, and Engineering teams to define and continually improve Auth0’s technology stack and architecture.</li>
</ul>
<ul>
<li>Foster and lead innovation in the IAM space, with a strong focus on Agentic Identity.</li>
</ul>
<ul>
<li>Lead initiatives to enhance, scale, and evolve Auth0’s product offerings.</li>
</ul>
<ul>
<li>Embed within Engineering teams across the organisation for large projects, while providing guidance and lighter touch engagements for smaller initiatives.</li>
</ul>
<ul>
<li>Design, architect, and document large scale distributed systems.</li>
</ul>
<ul>
<li>Lead the development of complex, broadly-scoped functionality in a very large and deep set of services and components.</li>
</ul>
<ul>
<li>Teach by doing: coding, optimising, and troubleshooting Node.js and Go applications in collaboration with feature development teams.</li>
</ul>
<ul>
<li>Implement features and create consistent foundations using technologies such as AWS, Azure, Node.js, Go, MongoDB, Redis, PostgreSQL, Kubernetes.</li>
</ul>
<ul>
<li>Investigate, understand, and resolve bottlenecks in our ability to scale, use resources efficiently, and maintain a 99.99% uptime SLA.</li>
</ul>
<ul>
<li>Drive technical decision making while striving to find the right balance between factors such as simplicity, flexibility, reliability, cost, and performance.</li>
</ul>
<ul>
<li>Participate in “round table” discussions and mentor team members and engineers throughout the organisation to level up our people.</li>
</ul>
<ul>
<li>Participate in our Engineering Leadership Team with other architects, directors, and executives.</li>
</ul>
<ul>
<li>Join our Incident Commander on-call rotation after spending time getting acquainted with our applications, systems, and processes, and completing training. Members of our team do periodic on-call rotations for high-severity incidents to help up-level our responses.</li>
</ul>
<p><strong>What you’ll bring to the role</strong></p>
<ul>
<li>10+ years of software development experience.</li>
</ul>
<ul>
<li>5+ years of experience working on cloud applications.</li>
</ul>
<ul>
<li>Experience with API-first applications using REST and/or gRPC.</li>
</ul>
<ul>
<li>Passion and thorough understanding of what it takes to build and operate secure, reliable systems at scale.</li>
</ul>
<ul>
<li>Knowledge of Identity Protocols such as OAuth, OIDC and SAML.</li>
</ul>
<ul>
<li>Industry knowledge of the Authorization and Authentication spaces.</li>
</ul>
<ul>
<li>Experience building AI Agents and/or MCP server applications.</li>
</ul>
<ul>
<li>Experience with security engineering and application security.</li>
</ul>
<ul>
<li>Very strong written and verbal communication skills with a demonstrated ability to adjust your communication style to the intended audience, whether communicating with senior executives, customers, engineers, or product managers.</li>
</ul>
<ul>
<li>Mastery of hands-on software development and a deep understanding of building distributed systems.</li>
</ul>
<ul>
<li>Experience with multi-cloud environments and container deployments, particularly Kubernetes in AWS/Azure.</li>
</ul>
<ul>
<li>Prior experience with application performance management, tracing, and performance testing tools.</li>
</ul>
<ul>
<li>Excellence at creating clarity and alignment for technical initiatives.</li>
</ul>
<ul>
<li>Great ability to build trust through collaboration with multiple teams and get consensus on a vision.</li>
</ul>
<ul>
<li>Knowledge of application security and cloud security best practices.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>Deep experience in Node.js (JavaScript or TypeScript) or Golang.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$274,000-$370,000 USD</Salaryrange>
      <Skills>API-first applications, REST, gRPC, OAuth, OIDC, SAML, Authorization, Authentication, AI Agents, MCP servers, Security engineering, Application security, Cloud security best practices, Node.js, Go, AWS, Azure, MongoDB, Redis, PostgreSQL, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7128746</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>16599c27-a87</externalid>
      <Title>Senior Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure, ensure the reliability of multi-cloud Kubernetes clusters and pipelines, and implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. You&#39;ll automate operations and engineering so we can spend energy where it matters.</p>
<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field. You should have deep proficiency with coding languages such as Golang or Python, and deep familiarity with container-related security best practices. You should also have production experience working with Kubernetes and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Flexible PTO to take the time you need, when you need it</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Retirement savings plan to help you plan for the future</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>In-office meal program and commuter benefits provided for onsite employees</li>
</ul>
<p>Compensation at Cresta:</p>
<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>
<p>OTE Range: $205,000–$270,000, plus equity</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$205,000–$270,000</Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center using AI and human intelligence.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5137153008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe235887-6b4</externalid>
      <Title>Senior Fullstack Product Software Engineer, DocSend</Title>
      <Description><![CDATA[<p>As a Senior Full-Stack Product Engineer on the Dropbox DocSend team, you will play a pivotal role in shaping the future of secure document management, sharing, and tracking.</p>
<p>Your responsibilities will revolve around developing and enhancing our product to deliver exceptional user experiences, working closely with cross-functional teams to turn innovative ideas into robust, scalable, and user-friendly features. You will also have the opportunity to drive high impact and have high ownership in a smaller, startup-like team.</p>
<p>We are focused on expanding our Virtual Data Room business by improving deal workflows and introducing AI-enabled features.</p>
<p>You will autonomously lead full-stack projects, making effective tradeoffs between technical requirements and business goals. You will act as a leader across the org with impact extending beyond the immediate team, driving cross-team initiatives and collaborating effectively with cross-functional teams, including product managers, designers, and other engineers.</p>
<p>You will set a high bar for quality and operational excellence, preemptively identifying and resolving technical risks, and championing best practices across the team through code and design reviews.</p>
<p>You will mentor teammates, providing actionable feedback to help them grow into the next level. You will participate in on-call rotations, which entail being available for calls during both core and non-core business hours, and debug customer issues using logs, metrics, and traces.</p>
<p>The ideal candidate will have 9+ years of experience in software engineering or related industry roles, a BS degree in Computer Science or related technical field involving coding, and demonstrated expertise in Ruby on Rails applications and React.</p>
<p>Preferred qualifications include familiarity with tools and languages used on the DocSend Engineering team, such as Typescript, GraphQL, HAML, and PostgreSQL.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,200-$274,300 USD</Salaryrange>
      <Skills>Ruby on Rails, React, Typescript, GraphQL, HAML, PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file-sharing services. It has a double-digit growth rate year over year.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7641558</Applyto>
      <Location>Remote - US: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1903386-87b</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering so we can spend energy where it matters.</li>
<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</p>
<ul>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD tooling such as GitHub Actions.</li>
</ul>
<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>
<p>Compensation includes a base salary, equity, and a variety of benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower team members to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4535898008</Applyto>
      <Location>Germany (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26212e9e-5a8</externalid>
      <Title>Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>Ours is a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering so we can spend energy where it matters.</li>
<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD such as GitHub Actions.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>
<li>Flexible vacation time to promote a healthy work-life blend.</li>
<li>Paid parental leave to support you and your family.</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5113847008</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d4ebd626-2bf</externalid>
      <Title>Staff+ Software Engineer, Databases</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>
<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>
<p>This is a unique opportunity to tackle database challenges at unprecedented scale. You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the technical direction for database solutions used across Product and Research</li>
</ul>
<ul>
<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>
</ul>
<ul>
<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>
</ul>
<ul>
<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>
</ul>
<ul>
<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>
</ul>
<ul>
<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>
</ul>
<ul>
<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>
</ul>
<ul>
<li>Make critical build vs. buy decisions for database technologies</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>
</ul>
<ul>
<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
</ul>
<ul>
<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>
</ul>
<ul>
<li>Have successfully scaled databases through massive growth at high-growth companies</li>
</ul>
<ul>
<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>
</ul>
<ul>
<li>Excel at technical leadership and cross-functional collaboration</li>
</ul>
<ul>
<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>
</ul>
<ul>
<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>
</ul>
<ul>
<li>Experience building multi-cloud or hybrid cloud database solutions</li>
</ul>
<ul>
<li>Knowledge of database orchestration and automation at scale</li>
</ul>
<ul>
<li>Background at companies known for database excellence</li>
</ul>
<p>Note: Prior AI/ML infrastructure experience is not required. We value deep infrastructure/databases expertise from any domain.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>database architecture, OLTP systems, distributed database systems, database scaling, database performance optimization, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, vector databases, async job processing frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151069008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>564289ba-f9b</externalid>
      <Title>Senior Fullstack Product Software Engineer, DocSend</Title>
      <Description><![CDATA[<p>As a Senior Full-Stack Product Engineer on the Dropbox DocSend team, you will play a pivotal role in shaping the future of secure document management, sharing, and tracking.</p>
<p>Your responsibilities will revolve around developing and enhancing our product to deliver exceptional user experiences, working closely with cross-functional teams to turn innovative ideas into robust, scalable, and user-friendly features. You will also have the opportunity to drive high impact and have high ownership in a smaller, startup-like team.</p>
<p>We are focused on expanding our Virtual Data Room business by improving deal workflows and introducing AI-enabled features.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Autonomously leading full-stack projects, making effective tradeoffs between technical requirements and business goals.</li>
<li>Acting as a leader across the org, with impact extending beyond the immediate team, driving cross-team initiatives and collaborating effectively with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Setting a high bar for quality and operational excellence, preemptively identifying and resolving technical risks, and championing best practices across the team through code and design reviews.</li>
<li>Mentoring teammates, providing actionable feedback to help them grow into the next level.</li>
<li>Participating in on-call rotations, which entail being available for calls during both core and non-core business hours, and debugging customer issues using logs, metrics, and traces.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>9+ years of experience in software engineering or related industry roles.</li>
<li>BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience.</li>
<li>Demonstrated expertise in Ruby on Rails applications and React.</li>
<li>Demonstrated success in developing and deploying large-scale web applications with a user-focused approach.</li>
<li>Proven ability to thrive in agile, fast-paced environments, including comfort with continuous deployment practices and rapid iteration.</li>
</ul>
<p>Preferred qualifications include familiarity with tools and languages used on the DocSend Engineering team, including Typescript, GraphQL, HAML, and PostgreSQL.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,400-$257,600 CAD</Salaryrange>
      <Skills>Ruby on Rails, React, Typescript, GraphQL, HAML, PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox&apos;s fastest-growing business, with a double-digit growth rate year over year.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7641561</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5f2dbbff-10c</externalid>
      <Title>Principal Software Engineer - Search Relevance - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search team. This globally-distributed team of expert engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>This is a principal software engineering role that focuses on enhancing the vector and keyword search functionality within Elasticsearch, covering the design and implementation of new search features, enhancements to existing search functionality, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p><strong>What You Will Be Doing</strong></p>
<ul>
<li>Lead initiatives within Elasticsearch to produce an industry-leading search engine offering, supplying unparalleled speed and relevance in search.</li>
<li>Contribute to Elasticsearch full time, building new search features and fixing intriguing bugs, all while making the code easier to understand. Sometimes you&#39;ll need to invent a new algorithm or data structure. Or find one and implement it. Sometimes you&#39;ll need to get close to the operating system and hardware.</li>
<li>Work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch.</li>
<li>Be an expert on Elasticsearch search relevance. You&#39;ll identify and drive improvements in this area based on your questions and your instincts.</li>
<li>Work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself.</li>
<li>Write idiomatic modern Java -- Elasticsearch is 99.8% Java!</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Professional experience with search and vector databases, having used HNSW, IVF, or other relevant algorithms and libraries on search platforms at scale.</li>
<li>You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as other features like lambdas.</li>
<li>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</li>
<li>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code and approaches and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</li>
<li>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</li>
<li>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve built things with Elasticsearch before.</li>
<li>You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration.</li>
<li>You have experience designing, leading and owning cross-functional initiatives</li>
</ul>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is listed below.</p>
<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>
<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched Registered Retirement Savings Plan (RRSP) with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
<p>The typical starting salary range for this role is: $154,000-$243,600 CAD</p>
<p><strong>Additional Information - We Take Care of Our People</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with minimum of 16 weeks of parental leave</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,000-$243,600 CAD</Salaryrange>
      <Skills>Java, Search and vector databases, HNSW, IVF, Lucene, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Its platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>154000</Compensationmin>
      <Compensationmax>243600</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7699668</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a3f562b-768</externalid>
      <Title>Senior Staff Software Engineer, API</Title>
<Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p><strong>About the role</strong></p>
<p>Anthropic is seeking an exceptional Senior Staff Software Engineer to join the Claude Developer Platform team and serve as the senior-most individual contributor across API Engineering. Since launch, the Claude API has seen rapid growth and adoption by companies of all sizes building AI applications with our industry-leading models. The API serves as the primary channel for safely and broadly distributing AI&#39;s benefits across all sectors of the economy.</p>
<p>This role sets the technical direction for the systems that make Claude accessible to developers, enterprises, and partners at scale. You will operate at the intersection of technical strategy and execution, partnering closely with Research, Inference, Platform, Infrastructure, and Safeguards to ensure the Claude API is reliable, capable, and positioned to grow with Anthropic&#39;s ambitions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and drive multi-year technical strategy for the Claude API, setting direction across API Core, Capabilities, Knowledge, Distributability, and Agents.</li>
<li>Identify and personally lead the highest-complexity, highest-impact engineering initiatives spanning multiple teams.</li>
<li>Serve as the primary technical decision-maker for major architectural decisions with org-wide scope.</li>
<li>Partner with Research to evaluate and integrate frontier capabilities; work with Inference and Platform for reliable delivery at scale; collaborate with Infrastructure and Safeguards for reliability, security, and responsible deployment.</li>
<li>Mentor and develop Staff-level engineers across the org.</li>
<li>Drive alignment across Product, GTM, Safety, and beyond while proactively identifying and addressing systemic technical risks.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 12+ years of engineering experience with a clear track record operating at Staff or Senior Staff level.</li>
<li>Have demonstrably shaped technical strategy for large-scale API or distributed systems platforms.</li>
<li>Drive the highest-leverage technical outcomes without formal authority; you lead through influence, quality of thinking, and trust.</li>
<li>Have deep expertise in distributed systems and API architecture, and are effective writing design docs, making architectural calls, and coding in critical paths.</li>
<li>Are highly effective across org boundaries; you build trust with Research, Inference, Infrastructure, Safeguards, and business stakeholders alike.</li>
<li>Bring strong product instincts and a craftsperson&#39;s approach to API design; you communicate clearly with both technical and non-technical audiences.</li>
</ul>
<p><strong>Technical Stack</strong></p>
<ul>
<li>Languages: Python, TypeScript</li>
<li>Frameworks: FastAPI, React</li>
<li>Infrastructure: GCP, Kubernetes, Cloud Run, AWS, Azure</li>
<li>Databases: PostgreSQL (AlloyDB), Vector Stores, Firestore</li>
<li>Tools: Feature Flagging, Prometheus, Grafana, Datadog</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>Location Preference: Preference will be given to candidates based in New York or the San Francisco Bay Area, as these positions are part of an SF- or NY-based team.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, TypeScript, FastAPI, React, GCP, Kubernetes, Cloud Run, AWS, Azure, PostgreSQL, Vector Stores, Firestore, Feature Flagging, Prometheus, Grafana, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems. It is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>405000</Compensationmin>
      <Compensationmax>485000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5134895008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9503a764-3c3</externalid>
      <Title>Staff Backend (Python) Engineer, AI Engineering: Duo Chat</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (Python) on the Duo Chat team in AI Engineering, you&#39;ll lead the backend architecture that powers GitLab Duo Chat across the GitLab DevSecOps platform.</p>
<p>You&#39;ll solve hard problems in building reliable, secure, and scalable AI-powered chat workflows so customers can plan, write, review, and secure code faster, with confidence.</p>
<p>This is a hands-on technical leadership role where you&#39;ll set direction for how we integrate and evolve large language model providers (including Google Vertex AI) across Ruby on Rails and Python services, raise the bar on observability and testing, and guide the team through ambiguous, high-impact technical decisions.</p>
<p>Over your first year, you&#39;ll be expected to drive key architectural choices, reduce technical debt that slows iteration, and help the team ship durable improvements to response quality, reliability, and maintainability.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Integrate new generative AI models and providers into GitLab Duo Chat to expand capabilities and improve response quality</li>
</ul>
<ul>
<li>Improve debugging, observability, and test coverage for AI-powered chat workflows to increase reliability at scale</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define the technical architecture and technical roadmap for the Duo Chat group, aligning backend execution with product direction and engineering priorities</li>
</ul>
<ul>
<li>Solve the highest-scope and most ambiguous backend problems, delivering secure, well-tested, performant solutions with minimal guidance</li>
</ul>
<ul>
<li>Integrate and extend generative AI capabilities in GitLab Duo Chat, including large language models (LLMs) and providers such as Google Vertex AI</li>
</ul>
<ul>
<li>Develop, ship, and maintain backend features across Python and Ruby on Rails services that power Duo Chat experiences across the GitLab platform</li>
</ul>
<ul>
<li>Design, implement, and review GraphQL application programming interface (API) contracts and supporting backend logic to ensure reliability, scalability, and clear frontend integrations</li>
</ul>
<ul>
<li>Improve observability, debugging workflows, and incident readiness by strengthening logging, tracing, and production troubleshooting practices</li>
</ul>
<ul>
<li>Drive code quality and long-term maintainability by setting internal standards, leading code reviews, and identifying and reducing technical debt</li>
</ul>
<ul>
<li>Mentor engineers across the team and participate in Tier 2 on-call rotations, contributing to root cause analysis and follow-up improvements to resiliency and testing (including RSpec)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Production experience building and operating backend services in Python, including background jobs, APIs, and data models</li>
</ul>
<ul>
<li>Ability to define and evolve technical architecture by weighing trade-offs, selecting patterns and tools, and setting a clear technical direction for others to follow</li>
</ul>
<ul>
<li>Experience setting and driving a technical roadmap in partnership with product and engineering stakeholders</li>
</ul>
<ul>
<li>Proficiency designing and maintaining REST and/or GraphQL APIs with attention to scalability, maintainability, and backward compatibility</li>
</ul>
<ul>
<li>Hands-on experience integrating large language models into applications, including prompt design and building features powered by generative AI</li>
</ul>
<ul>
<li>Strong SQL skills and experience working with relational databases such as PostgreSQL, including efficient queries and data modeling</li>
</ul>
<ul>
<li>Experience mentoring engineers through code review, architectural guidance, and shared standards, and communicating complex technical decisions in a clear, async-first way</li>
</ul>
<ul>
<li>Comfort contributing in a mature codebase across Python and Ruby on Rails, with openness to learning and applying transferable skills from related technologies or domains</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Duo Chat team sits within GitLab&#39;s AI Engineering organization and is responsible for building and evolving GitLab Duo Chat, the AI-powered chat experience embedded across the GitLab DevSecOps platform.</p>
<p>You&#39;ll work with a small, cross-functional group of backend, frontend, and AI specialists who collaborate asynchronously across time zones, using GitLab issues, merge requests, and documentation as the primary way of working.</p>
<p>The team focuses on integrating and scaling generative AI capabilities (including providers like Google Vertex AI), improving reliability and performance, and strengthening debugging, observability, and testing workflows so customers can safely use AI to plan, write, review, and secure their code across GitLab.</p>
<p><strong>How GitLab Supports Full-Time Employees</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Backend engineering, API design, GraphQL, Ruby on Rails, PostgreSQL, SQL, Large language models, Generative AI, Prompt design, Code review, Architectural guidance, Async-first communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8450446002</Applyto>
      <Location>Remote, Americas; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4920db00-eb9</externalid>
      <Title>Senior Backend Engineer (RoR), SSCS: Authorization</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the Authorization team at GitLab, you&#39;ll build and evolve the core systems that decide who can access what across the entire GitLab platform, directly impacting millions of users from startups to large enterprises.</p>
<p>You&#39;ll architect and implement our next-generation authorization infrastructure, including policy-as-code approaches, fine-grained permissions, and performance optimizations at massive scale, enabling GitLab&#39;s move toward zero-trust architecture while keeping authorization fast, secure, and correct.</p>
<p>You&#39;ll work closely with Security, Database, Platform, and authentication-focused teams to design and ship authorization capabilities that span GitLab&#39;s various deployment models and multi-tenant environments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Implementing fine-grained permissions for Job Tokens, Personal Access Tokens, and the GitLab Duo agent platform</li>
</ul>
<ul>
<li>Collaborating on Auth stack initiatives that evolve how authorization works across GitLab</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Implement fine-grained permission systems for Job Tokens, Personal Access Tokens, the GitLab Duo Agent Platform, and other authentication mechanisms across the GitLab platform.</li>
</ul>
<ul>
<li>Collaborate with Security, Authentication, Database, and Platform teams on authorization stack initiatives, aligning designs and implementation plans.</li>
</ul>
<ul>
<li>Solve complex performance challenges in authorization, including query optimization, caching strategies, and database decomposition, with a focus on PostgreSQL.</li>
</ul>
<ul>
<li>Design and evolve authorization systems that work across multiple deployment models and multi-tenant architectures while maintaining security and reliability.</li>
</ul>
<ul>
<li>Drive improvements to authorization security, maintainability, and developer experience through code review, documentation, and technical leadership.</li>
</ul>
<ul>
<li>Contribute to architectural decisions for authorization features with a long-term strategic view, balancing immediate needs with future scalability.</li>
</ul>
<ul>
<li>Mentor and support other engineers in authorization patterns, policy-based access control, and secure coding practices in a fully remote, asynchronous environment.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Professional experience building and maintaining production applications with Ruby on Rails or similar backend frameworks.</li>
</ul>
<ul>
<li>Strong understanding of authorization models, including role-based access control, attribute-based access control, and fine-grained permission patterns.</li>
</ul>
<ul>
<li>Experience designing and optimizing high-scale backend systems, including PostgreSQL performance tuning, query optimization, and effective caching strategies.</li>
</ul>
<ul>
<li>Familiarity with or interest in policy-based authorization systems and modern policy languages such as Cedar or Rego.</li>
</ul>
<ul>
<li>Understanding of core security principles, including threat modeling, least-privilege access, and zero-trust architectures.</li>
</ul>
<ul>
<li>Experience working with distributed systems and service-to-service communication in a cloud or multi-tenant environment.</li>
</ul>
<ul>
<li>Demonstrated ability to own complex technical initiatives from design through production deployment in an asynchronous, remote setting.</li>
</ul>
<ul>
<li>Strong collaboration and communication skills, with openness to learning and applying transferable skills from adjacent domains or technologies.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>On the Authorization team at GitLab, we design, build, and maintain the permission systems that control access across the GitLab platform, ensuring they are secure, scalable, and flexible for customers of all sizes.</p>
<p>We lead the ongoing evolution of our authorization architecture, with a focus on modern policy-as-code approaches, fine-grained access control, and support for initiatives like the evolving Auth stack.</p>
<p>We collaborate asynchronously across time zones and partner closely with Authentication, Product Security, Database, and Security teams to align on identity, data modeling, and threat modeling needs while iterating safely on core platform capabilities.</p>
<p><strong>How GitLab Supports Full-Time Employees</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, Authorization models, Policy-based access control, Fine-grained permission patterns, Distributed systems, Service-to-service communication, Cloud or multi-tenant environment, Cedar or Rego policy languages, PostgreSQL performance tuning, Query optimization, Effective caching strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps that enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8457315002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09e766cb-2a4</externalid>
      <Title>Software Engineer, Enterprise Integrations</Title>
      <Description><![CDATA[<p>Aboutfrica</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request.</p>
<p>Available Locations: Austin, TX</p>
<p>About the Department</p>
<p>Cloudflare&#39;s Enterprise Integrations Engineering Team designs, builds, and maintains integrations across a wide range of SaaS applications used throughout the organization. Our mission is to create scalable, reliable, and maintainable systems that ensure data flows securely and efficiently between platforms.</p>
<p>What You&#39;ll Do</p>
<p>We&#39;re looking for a software engineer to join our Enterprise Integrations Team. You&#39;ll work on building and maintaining integration workflows between Cloudflare and a variety of SaaS applications. This includes taking work from concept through implementation, including gathering requirements, writing technical specifications, development, testing, and deployment. You&#39;ll collaborate closely with internal teams to ensure integrations meet business needs and are built following engineering best practices. As you grow in the role, you&#39;ll have the opportunity to lead larger initiatives and own projects from end to end.</p>
<p>Qualifications &amp; Skills Required:</p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field, or equivalent work experience</li>
<li>Minimum of 5 years of professional experience as a software engineer</li>
<li>Experience working with internal stakeholders to solve business problems through integration solutions</li>
<li>Proficiency in Golang</li>
<li>Experience building RESTful APIs with proper service security practices</li>
<li>Experience working with observability tools such as Grafana, Prometheus, Sentry, or Kibana</li>
<li>Experience with Kubernetes</li>
<li>Experience with GitLab or other CI/CD tools</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience working with ERP systems such as Oracle or NetSuite</li>
<li>Experience working in an Agile Scrum environment</li>
<li>Familiarity with tools like Jira and Confluence</li>
<li>Familiarity with integration patterns such as pub/sub, CDM (Common Data Model), and batch processing</li>
<li>Experience working with PostgreSQL</li>
<li>Experience with Cloudflare Developer’s Platform</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, RESTful APIs, Observability tools, Kubernetes, GitLab, ERP systems, Agile Scrum, Jira, Confluence, Integration patterns, PostgreSQL, Cloudflare Developer’s Platform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7336735</Applyto>
      <Location>Austin, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on coding experience with data structures and algorithms (DSA). You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ab209e80-6b1</externalid>
      <Title>Senior Full Stack Product Software Engineer</Title>
      <Description><![CDATA[<p>As a Senior Full Stack Software Engineer at Dropbox, you will help design and develop the seamless, scalable, and user-friendly experiences Dropbox users depend on.</p>
<p>You will take ownership of key product areas, delivering end-to-end solutions that combine front-end user interfaces with robust back-end systems.</p>
<p>This year, Dropbox is on a mission to expedite the creation and implementation of AI-enabled products, providing a comprehensive technology stack for rapid prototyping and reliable deployment of AI-augmented functionality.</p>
<p>Responsibilities:</p>
<ul>
<li>Manage projects end-to-end: Lead initiatives from data discovery through design, implementation, and deployment.</li>
<li>Develop customer-centric prototypes: Create prototypes for new product explorations, focusing on user needs and feedback.</li>
<li>Proactively communicate: Share insights, progress, and outcomes with your team and leadership regularly.</li>
<li>Collaborate across teams: Foster strong relationships with other engineering teams and collaborate effectively with cross-functional partners within Dropbox.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of professional experience in full-stack development</li>
<li>BS degree or higher in Computer Science, a related field, or equivalent experience</li>
<li>Strong experience designing, developing, and scaling web applications</li>
<li>Expertise in front-end (JavaScript, React, Angular, HTML/CSS) and back-end (Node.js, Python) development</li>
<li>Familiarity with databases such as MySQL, PostgreSQL, or MongoDB</li>
</ul>
<p>Compensation:</p>
<p>Canada Pay Range $190,400-$257,600 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,400-$257,600 CAD</Salaryrange>
      <Skills>full-stack development, JavaScript, React, Angular, HTML/CSS, Node.js, Python, MySQL, PostgreSQL, MongoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file sharing services.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7536345</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86696218-8f0</externalid>
      <Title>Staff Backend Engineer (Ruby on Rails/AI), Verify</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a central role in how we integrate AI into CI/CD workflows. Your work will impact performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>
<p>In this role, you&#39;ll go beyond using AI tools and help define how we design, build, and iterate on AI-assisted and agentic CI experiences. You&#39;ll set standards for what good looks like across our AI agent portfolio, including how we measure success, how we instrument behavior in production, and how we account for large language model limitations. You&#39;ll also help responsibly integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>
<p>We have ambitious goals for Agentic CI in FY27. As a Staff Engineer, you will:</p>
<ul>
<li>Partner with Engineering, Product, and UX leadership to pressure-test our priorities: where we can move faster, where we&#39;re missing data, and where there&#39;s whitespace to innovate, learning and growing alongside the Engineering team you&#39;ll collaborate closely with.</li>
<li>Define what success looks like across our agent portfolio and make sure we&#39;re tracking against it: not just shipping, but learning.</li>
<li>Bring a sharp eye to the competitive landscape, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>
</ul>
<p>Examples of Agentic CI work we have planned for the upcoming year:</p>
<ul>
<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>
<li>Automate the Fix a Failing Pipeline flow at scale, from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>
<li>Build the instrumentation and observability layer that makes agentic CI trustworthy (trigger volume dashboards, retry rates, cost safeguards), so we can measure what&#39;s working, catch what isn&#39;t, and iterate with confidence.</li>
<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>
</ul>
<p>What you&#39;ll do:</p>
<ul>
<li>Shape and scale GitLab CI backend infrastructure to improve performance, reliability, and usability for users running jobs at high volume.</li>
<li>Design and implement AI-powered features for Agentic CI, including agents, agentic flows, and LLM-backed tooling that integrates with GitLab&#39;s Duo Agent Platform.</li>
<li>Define what success looks like for AI in CI before you build, including baselines, measurable outcomes, and clear signals that help the team learn and iterate.</li>
<li>Build the instrumentation and observability needed to make AI-assisted CI trustworthy in production, including feature behavior metrics, dashboards, and safeguards.</li>
<li>Own and drive measurable performance improvements across CI systems (for example, database access patterns, background processing, and job orchestration) by forming hypotheses, running experiments, and validating results with data.</li>
<li>Write secure, well-tested, maintainable Ruby on Rails code in a large monolith, improving existing features while reducing technical debt and operational risk.</li>
<li>Lead cross-functional technical work with Product, UX, and Infrastructure, influencing architecture and execution across the Verify stage.</li>
<li>Share standards, patterns, and learnings with other engineers, raising the bar for responsible AI integration and evidence-driven engineering across CI.</li>
</ul>
<p>This role requires:</p>
<ul>
<li>Advanced proficiency with Ruby and Ruby on Rails, with experience building and maintaining reliable backend services in a large codebase.</li>
<li>Strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation.</li>
<li>Hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains.</li>
<li>Practical experience designing and shipping AI-powered backend features and integrations, including sound judgment about large language model limitations and responsible use in production.</li>
<li>A data-driven approach to engineering: defining hypotheses, establishing baseline metrics, instrumenting changes, and measuring outcomes against clear success criteria.</li>
<li>Familiarity with observability patterns and tools (metrics, logging, tracing) to diagnose issues, improve reliability, and guide iteration.</li>
<li>Strong backend architecture and delivery practices, including secure design, well-tested code, and strategies for safe rollouts and zero-downtime changes.</li>
<li>Clear written and verbal communication skills, including writing technical proposals and documentation, and collaborating effectively in a remote, asynchronous, cross-functional environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Ruby on Rails, PostgreSQL, Data modeling, Query tuning, Scaling large tables, High-traffic production systems, CI, Workflow orchestration, Infrastructure-heavy domains, AI-powered backend features, Large language model limitations, Responsible use in production, Data-driven approach to engineering, Observability patterns, Metrics, Logging, Tracing, Backend architecture, Delivery practices, Secure design, Well-tested code, Safe rollouts, Zero-downtime changes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8448283002</Applyto>
      <Location>Remote, APAC; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US; Remote, US-Southeast</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a027f462-69a</externalid>
      <Title>Senior Software Developer - Storage Engine - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Developer to join the team that contributes to improving our storage efficiency for metrics, logs, and other types of data. As a software engineer in the team, you will work on different initiatives, such as enhancing current logging solutions to ensure that logging data is always accepted and persisted, advancing our current metrics processing capabilities to ensure massive and seamless adoption by our customers, and improving storage efficiency across the board. You&#39;ll also be extending the logic for efficiently querying and aggregating the stored data, taking their storage layout and ordering into account.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable, intuitive, and high-quality software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p>As a Senior Software Developer, you will:</p>
<ul>
<li>Work with a globally distributed team of experienced engineers focused on data storage mechanisms and query capabilities of Elasticsearch.</li>
<li>Become the team&#39;s expert in the storage engine area, the person everyone turns to with questions about it. You&#39;ll improve the area based on those questions and your own instincts.</li>
<li>Be a full-time Elasticsearch contributor, building data-intensive new features, fixing intriguing bugs, and increasing the testing coverage, all while making the code easier to understand.</li>
<li>Design and implement advanced algorithms and data structures, often working at the system and hardware level. You’ll also engage with our global community for triaging and resolving issues and pull requests.</li>
</ul>
<p>We&#39;re looking for someone with strong core Java skills and an excellent understanding of concurrent and parallel programming principles. You should have an excellent background in applied data processing (data structures, algorithms) and be familiar with storage systems and low-level abstractions in OS. You should also be able to work with a high level of autonomy and be able to take on projects and guide them from beginning to end.</p>
<p>This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $128,300-$203,000 CAD. This role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched Registered Retirement Savings Plan (RRSP) with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$128,300-$203,000 CAD</Salaryrange>
      <Skills>Java, Concurrent and parallel programming principles, Data structures and algorithms, Storage systems and low-level abstractions in OS, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7592630</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b054d891-685</externalid>
      <Title>Staff+ Software Engineer, Databases</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>
<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>
<p>This is a unique opportunity to tackle database challenges at unprecedented scale. You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the technical direction for database solutions used across Product and Research</li>
<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>
<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>
<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>
<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>
<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>
<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>
<li>Make critical build vs. buy decisions for database technologies</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>
<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>
<li>Have successfully scaled databases through massive growth at high-growth companies</li>
<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>
<li>Excel at technical leadership and cross-functional collaboration</li>
<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>
<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>
<li>Experience building multi-cloud or hybrid cloud database solutions</li>
<li>Knowledge of database orchestration and automation at scale</li>
<li>Background at companies known for database excellence</li>
</ul>
<p>Note: Prior AI/ML infrastructure experience is not required. We value deep infrastructure/databases expertise from any domain.</p>
<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Database architecture, Distributed database systems, OLTP systems, Database scaling, Database performance optimization, Database reliability, Database cost efficiency, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, Vector databases, Async job processing frameworks, Multi-cloud database solutions, Hybrid cloud database solutions, Database orchestration, Database automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151069008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>759f1d00-447</externalid>
      <Title>Software Engineer, Workers Builds &amp; Automation</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a member of the Workers team, you will collaborate with Engineers, Designers, and Product Managers to design, build and support large scale, customer facing systems that push the boundaries of what is possible at Cloudflare&#39;s edge computing platform. You will drive projects from idea to release, delivering solutions at all layers of the software stack to empower the Cloudflare customers.</p>
<p>Requisite Skills</p>
<ul>
<li>2-5 years professional software engineering experience</li>
<li>Experience using Cloudflare Workers or Pages</li>
<li>Must have strong experience with Javascript and Typescript</li>
<li>Experience working in frontend frameworks such as React</li>
<li>Experience with SQL and common relational database systems such as PostgreSQL</li>
<li>Experience with Kubernetes or similar deployment tools</li>
<li>Product mindset and comfortable talking to customers and partners</li>
<li>Experience delivering projects end-to-end – gathering requirements, writing technical specifications, implementing, testing, and releasing</li>
<li>Comfortable managing multiple projects simultaneously</li>
<li>Able to participate in an on-call shift</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience with Go</li>
<li>Experience with metrics and observability tools such as Prometheus, Grafana</li>
<li>Experience scaling systems to meet increasing performance and usability demands</li>
<li>Knowledge of OAuth and building integrations with third-parties</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloudflare Workers, Pages, Javascript, Typescript, React, SQL, PostgreSQL, Kubernetes, Product mindset, Project management, Go, Prometheus, Grafana, OAuth, Third-party integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/5733639</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never store client IP addresses. Ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>83aa996d-190</externalid>
      <Title>Senior Software Engineer, Data Center Infrastructure Tooling</Title>
      <Description><![CDATA[<p>We&#39;re building one of the world&#39;s largest AI-focused cloud infrastructure platforms. As a senior backend engineer on this team, you&#39;ll help design, build, and own the data layer, APIs, and services that power our tools.</p>
<p>The goal is to build bespoke software that models our infrastructure at both a physical and logical level, driving planning, coordination, and automation of some of the most advanced AI datacenters.</p>
<p>You&#39;ll work closely with frontend engineers to build rich user experiences on top of your backends, and you&#39;ll own how these services are deployed and run in production, including scaling, redundancy, and monitoring.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building data models and APIs that capture the complexity of datacenter infrastructure</li>
<li>Creating high-throughput API services in Go (gRPC, GraphQL, and/or REST) that support the data density and interaction speed the frontend demands</li>
<li>Building the backend architecture from the ground up, including service structure, data access patterns, caching strategy, and API contracts designed to scale with the team and product scope</li>
<li>Integrating with internal/external systems and data sources that feed infrastructure planning, ensuring the platform reflects real-world state and planned builds accurately</li>
<li>Owning the deployment and operational infrastructure for the services you build, including Kubernetes manifests, CI/CD pipelines, observability, and reliability practices</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Strong proficiency in Go</li>
<li>Deep experience with relational databases, specifically PostgreSQL and CockroachDB</li>
<li>Experience designing and building APIs (gRPC, GraphQL, and REST) with attention to type safety, pagination, caching, filtering, and error handling</li>
<li>Proven experience of performance optimization on the backend</li>
<li>Familiarity with authentication, authorization, and backend security best practices for internal tooling</li>
<li>Experience owning deployment and operations for the services you build</li>
<li>Genuine curiosity about (or direct experience with) physical datacenter infrastructure</li>
<li>Strong data modeling instincts</li>
<li>Ability to work directly with infrastructure engineers to understand their workflows, identify pain points, and translate messy real-world processes into clean data models and APIs</li>
</ul>
<p>Nice to haves include:</p>
<ul>
<li>Direct experience with datacenter operations, infrastructure planning, or familiarity with DCIM tools like NetBox, Infrahub, or Sunbird</li>
<li>Experience with CockroachDB specifically</li>
<li>Experience building systems that handle complex graph-like or hierarchical relational data</li>
<li>Exposure to Infrastructure-as-Code, Terraform, or GitOps workflows</li>
<li>Experience with event-driven architectures, change data capture, or audit logging for compliance-sensitive systems</li>
</ul>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, and Achieve More Together.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Go, PostgreSQL, CockroachDB, API design, Performance optimization, Authentication, Authorization, Backend security, Deployment and operations, Datacenter operations, Infrastructure planning, DCIM tools, Complex graph-like or hierarchical relational data, Infrastructure-as-Code, Terraform, GitOps workflows, Event-driven architectures, Change data capture, Audit logging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure platform built for AI innovation, trusted by leading AI labs, startups, and global enterprises.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658311006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>79704b10-ff6</externalid>
      <Title>Software Engineer, Cloudforce One</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We are looking for great engineers to join our Cloudforce One team, which is responsible for identifying and disrupting cyber threats ranging from sophisticated cyber criminal activity to nation-state sponsored advanced persistent threats (APTs).</p>
<p>As a Software Engineer on this team, you will own the entire software development lifecycle, from design and architecture to deployment and monitoring, for systems that serve both threat disruption and legal response efforts.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, run, and scale distributed tools and services that support both cyber threat disruption and Legal Response efforts.</li>
<li>Develop critical data pipelines and services to collect, analyze, and expose threat intelligence data for Cloudforce One analysts and Cloudflare customers, helping to identify Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs).</li>
<li>Extend, improve, and maintain mission-critical Trust &amp; Safety solutions, including our CSAM Scanning Tool and other legal compliance pipelines.</li>
<li>Collaborate closely with Threat Operations, Trust &amp; Safety, Legal, and Product teams to understand goals and translate complex technical requirements into elegant, scalable solutions.</li>
</ul>
<p>Requirements</p>
<ul>
<li>At least 5 years of experience building large-scale software applications, preferably distributed systems.</li>
<li>Experience designing and integrating RESTful APIs and/or gRPC services.</li>
<li>Knowledge of SQL and common relational database systems such as PostgreSQL.</li>
<li>Prior experience writing production-ready code in Go and/or Typescript.</li>
<li>Familiarity with Rust.</li>
<li>Excellent debugging and optimization skills.</li>
<li>Expertise in writing well tested code.</li>
<li>Interest in opportunities to be a technical mentor for teammates.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Typescript, Rust, SQL, PostgreSQL, RESTful APIs, gRPC services, Distributed systems, Debugging, Optimization, Kafka, Redis, Kubernetes, Temporal, Web security, Industry standards for access control</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7309174</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fbd265ea-621</externalid>
      <Title>Software Engineer, Workers Deploy &amp; Config</Title>
      <Description><![CDATA[<p>Join the Workers Deploy &amp; Config team, the engine behind Cloudflare&#39;s unique serverless, edge-computing developer platform. This isn&#39;t just another backend role; you&#39;ll be building the critical, large-scale systems that empower developers worldwide to deploy everything - from a personal static site to full-stack applications serving millions of users.</p>
<p>In fact, you&#39;ll be building the very foundation that the rest of our developer platform, from Pages to R2, is built upon. You will tackle the complex challenges of distributed systems and high-traffic APIs every single day. Your mission? To build and scale the platform that lets customers upload, configure, and manage their Workers, ensuring it&#39;s incredibly fast, extremely resilient, and scales effortlessly.</p>
<p>You’ll drive projects from the initial idea to global release, delivering solutions at every layer of the stack. You’ll get to master a diverse and modern tech stack, writing high-performance Go, architecting APIs, optimizing storage interactions, building Workers with JavaScript/TypeScript, and managing it all on Kubernetes.</p>
<p>We&#39;re looking for engineers who are obsessed with the developer experience and thrive on solving large-scale problems with a track record to prove it. If you care as much about the quality of the user&#39;s experience as you do about the quality of your code, and you want to join a high-impact, fast-growing team helping to build a better Internet, we want to talk to you.</p>
<p>This role is about solving some of the most challenging problems in large-scale distributed systems. You&#39;ll be making a massive, direct impact on the broader developer community.</p>
<p><strong>Build &amp; Architect for Massive Scale</strong></p>
<ul>
<li>Own the core architecture of the Workers control plane, the system that deploys and configures millions of applications globally.</li>
<li>Proactively identify and eliminate performance bottlenecks, re-architecting critical services to handle exponential growth.</li>
<li>Design and implement resilient database schemas and read/write patterns built to support exponential platform growth and long-term usage.</li>
<li>Evolve our services into a true developer platform, building the foundational capabilities that unlock future products.</li>
</ul>
<p><strong>Drive for Extreme Performance &amp; Reliability</strong></p>
<ul>
<li>Obsess over the developer experience, with a relentless focus on reducing API latency and increasing API availability.</li>
<li>Own the reliability of one of Cloudflare&#39;s most critical, customer-facing systems.</li>
<li>Take pride in production ownership by participating in an on-call rotation to ensure our platform is always on.</li>
</ul>
<p><strong>Lead, Collaborate, &amp; Innovate</strong></p>
<ul>
<li>Partner directly with Product Managers and customers to translate complex problems into simple, elegant, and scalable solutions.</li>
<li>Lead technical design from the ground up, collaborating with a brilliant, globally-distributed team of engineers.</li>
<li>Act as a mentor and knowledge-sharer, leveling up the entire team.</li>
<li>Constantly research, prototype, and introduce cutting-edge technologies to solve new classes of problems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strong experience using Go, Experience with Javascript and Typescript, Experience with metrics and observability tools such as Prometheus and Grafana, Experience with SQL and common relational database systems such as PostgreSQL, Experience with Kubernetes or similar deployment tools, Experience with distributed systems, Proven ability to drive projects independently, from concept to implementation – gathering requirements, writing technical specifications, implementing, testing, and releasing, Familiarity with implementing and consuming RESTful APIs, Experience with C++ or Rust, Experience scaling systems to meet increasing performance and usability demands, Experience working on a control and/or data plane, Experience using Cloudflare Workers or Pages, Experience working in frontend frameworks such as React, Experience managing interns or mentoring junior engineers, Product mindset and comfortable talking to customers and partners, Familiarity with GraphQL, Familiarity with RPC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7377424</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>887e0254-384</externalid>
      <Title>Engineering Manager (Platform - Identity)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Engineering Manager, you will lead the Identity Accounts team, the platform foundation that powers every user, organization, and account at Coinbase.</p>
<p>This is one of the most visible and business-critical engineering platforms at the company: your team’s services handle authentication, authorization, security settings, and account management for millions of customers across every Coinbase product.</p>
<p>You will manage engineers across three sub-teams (Foundations, Users Platformization, and Settings &amp; Account Management), drive roadmap execution in close partnership with your Tech Lead and Product Manager, and represent the team to 20+ internal product groups and key partners in Security, Risk, Compliance, and Design.</p>
<p>If you thrive at the intersection of deep technical problems and cross-functional leadership, this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and grow a team of engineers across backend, frontend, and site reliability, building a high-performing team through hiring, coaching, and career development.</li>
</ul>
<ul>
<li>Drive roadmap execution across three focused sub-teams: Foundations (authorization infrastructure), Users Platformization (decomposing the legacy monolith), and Settings &amp; Account Management (Security Settings 2.0, account navigation redesign).</li>
</ul>
<ul>
<li>Own reliability and operational excellence for 8+ mission-critical Tier-0/Tier-1 services, maintaining 99.99% uptime, championing engineering quality, and acting as quarterback during high-severity incidents.</li>
</ul>
<ul>
<li>Represent the team to internal product groups and key partners in Security, Risk, Compliance, and Design, building alignment and ensuring seamless integration support.</li>
</ul>
<ul>
<li>Partner with Product and your Tech Lead to define strategic roadmaps, prioritize initiatives, and translate complex constraints into simple, scalable platform solutions.</li>
</ul>
<ul>
<li>Champion engineering excellence: drive code and design reviews, set engineering standards, and build every capability to be composable and reusable across product lines (no bespoke, one-off integrations).</li>
</ul>
<ul>
<li>Accelerate internal customers by reducing new product team onboarding time to under 2 weeks and delivering excellent APIs, clear documentation, and strong integration support.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in software engineering, with at least 2 years of engineering management experience leading teams of 5+ engineers.</li>
</ul>
<ul>
<li>Proven track record shipping large-scale distributed systems serving millions of users in production.</li>
</ul>
<ul>
<li>Technical fluency in coding, system design, API architecture, and reliability tradeoffs; able to be hands-on when needed (writing/reviewing code, leading incidents, triaging bugs).</li>
</ul>
<ul>
<li>Strong communicator who writes clearly, builds organizational alignment, and can represent the team effectively to senior leadership and cross-functional partners.</li>
</ul>
<ul>
<li>Experience building and scaling high-performing engineering teams through hiring, developing, and promoting talent.</li>
</ul>
<ul>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience in identity, authentication, authorization, or account management systems.</li>
</ul>
<ul>
<li>Prior experience leading a Platform team or similar domain with high internal customer dependency.</li>
</ul>
<ul>
<li>Familiarity with our stack: Go, gRPC, React, SpiceDB, Kubernetes, PostgreSQL, Kafka, Datadog.</li>
</ul>
<ul>
<li>Background building financial, high-reliability, or security systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>software engineering, engineering management, team leadership, technical fluency, coding, system design, API architecture, reliability tradeoffs, generative AI tools, copilots, identity, authentication, authorization, account management, Go, gRPC, React, SpiceDB, Kubernetes, PostgreSQL, Kafka, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service company that allows consumers and merchants to buy, sell, and store cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7731934</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d7e1a365-9dd</externalid>
      <Title>Principal Software Engineer II - Search Management - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search Management team. This globally-distributed team of experienced engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>As a Principal Software Engineer, you will be a full-time Elasticsearch contributor, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. You&#39;ll work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch.</p>
<p>You&#39;ll be an expert in several areas of Elasticsearch and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</p>
<p>You&#39;ll help us create the future of search within Elasticsearch, for example by building a scalable search tier for our Serverless platform and writing search functionality in ES|QL, our new piped query language.</p>
<p>You&#39;ll work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself.</p>
<p>You&#39;ll write idiomatic modern Java; Elasticsearch is 99.8% Java!</p>
<p>We&#39;re looking for someone with strong skills in core Java who is conversant with the standard library of data structures and concurrency constructs, as well as newer features like lambdas. You should be comfortable developing collaboratively, giving and receiving feedback on code, approaches, and APIs.</p>
<p>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</p>
<p>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</p>
<p>Bonus points if you&#39;ve built things with Elasticsearch before, worked in the search and information retrieval space, or have experience writing code for software-as-a-service or platforms-as-a-service.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,000-$243,600 CAD</Salaryrange>
      <Skills>core Java, standard library of data structures and concurrency constructs, newer features like lambdas, data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra, idiomatic modern Java, search and information retrieval space, software-as-a-service or platforms-as-a-service, collaborative development, code review, API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. They provide a cloud-based solution for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7699084</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b0fc94e-4e4</externalid>
      <Title>Staff Engineer - Fullstack</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Product</strong></p>
<p>Okta’s Auth0 is an easy-to-implement authentication and authorization platform designed by developers for developers. We make access to applications safe, secure, and seamless for the more than 100 million daily logins around the world. Our modern approach to identity enables this Tier-0 global service to deliver convenience, privacy, and security so customers can focus on innovation.</p>
<p><strong>The Team</strong></p>
<p>The Enablement team is at the core of expanding Auth0&#39;s capabilities for B2B customers, enabling seamless and automated user lifecycle management at a massive scale. We build and own the critical features that enterprises rely on to connect their identity sources to Auth0, including Enterprise APIs and our powerful self-service capabilities.</p>
<p>Our work is highly impactful, helping customers automate the creation, updating, and deactivation of users. This is a cornerstone for B2B SaaS applications that need to efficiently manage access for their own customers and partners. We work with NodeJS, TypeScript, PostgreSQL, MongoDB, and React to build these highly available and scalable services.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Help drive the architectural vision and strategy on the team to design and deliver powerful new enterprise APIs and functionality for our customers.</li>
</ul>
<ul>
<li>Orchestrate and lead major technical projects across teams as necessary.</li>
</ul>
<ul>
<li>Design, architect, code, and document large-scale distributed systems.</li>
</ul>
<ul>
<li>Serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</li>
</ul>
<ul>
<li>Mentor and coach less experienced engineers on sound engineering practices and technical leadership.</li>
</ul>
<ul>
<li>Collaborate with Product, Security, and other engineering teams to define and continually improve our platform and architecture.</li>
</ul>
<ul>
<li>Drive technical decision-making while striving to hit the right balance between factors such as simplicity, flexibility, reliability, and performance.</li>
</ul>
<ul>
<li>Participate in the team&#39;s on-call rotations to make sure we offer our customers the best availability for our services.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience working on large-scale systems or services.</li>
</ul>
<ul>
<li>Solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</li>
</ul>
<ul>
<li>Experience working on projects that required close collaboration with external teams, and a track record of making those projects a success.</li>
</ul>
<ul>
<li>Solid previous experience with Node.js (JavaScript or TypeScript) to build scalable backend services and create and maintain public and internal APIs.</li>
</ul>
<ul>
<li>Experience building full-stack applications with an understanding of React.</li>
</ul>
<ul>
<li>Good understanding of SQL (PostgreSQL) and NoSQL (MongoDB) databases and how to optimise them for performance under load.</li>
</ul>
<ul>
<li>Experience with containerisation (Docker) and cloud environments like AWS and Azure.</li>
</ul>
<ul>
<li>A good mentor and communicator who can explain complex concepts simply.</li>
</ul>
<p>#Hybrid</p>
<p>PID Number: P24578</p>
<p><strong>The Okta Experience</strong></p>
<ul>
<li>Supporting Your Well-Being</li>
</ul>
<ul>
<li>Driving Social Impact</li>
</ul>
<ul>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, TypeScript, PostgreSQL, MongoDB, React, Docker, AWS, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta makes access to applications safe, secure, and seamless for over 100 million daily logins worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7593555</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9357429-033</externalid>
      <Title>Senior Software Engineer, Core Identity (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>We are looking for a Senior Software Engineer to join our Core Identity team. As a Senior Software Engineer, you will design, build, and operate the critical services that form the backbone of our identity platform.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Join a fast-paced and agile team spread remotely across Central Europe and US or Canada EST.</li>
<li>Build innovative features and standards that extend the capabilities of Auth0’s platform to help organizations securely innovate around the world.</li>
<li>Take ownership of the technical quality, security, reliability, and scalability of our systems.</li>
<li>Work in a highly collaborative and cross-functional environment, working with talented engineers and partners across Product, Security, Design, Architecture and QA to deliver features that delight our customers.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of professional software development experience, or equivalent.</li>
<li>Proficiency in building backend services with Node.js (JavaScript or TypeScript).</li>
<li>Experience designing, building, and operating distributed systems in a cloud environment (e.g., AWS, Azure).</li>
<li>A strong commitment to quality, with experience in various testing strategies (e.g., unit, integration, end-to-end).</li>
<li>A product-oriented mindset, with the ability to understand customer needs and work collaboratively to find effective solutions.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Fluent at using AI tools as part of the Product Development Life Cycle (PDLC).</li>
<li>Experience in the identity and access management (IAM) domain.</li>
<li>Knowledge of security engineering principles and application security best practices.</li>
<li>Experience working effectively in a distributed, remote-first team.</li>
</ul>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>Okta is an Equal Opportunity Employer.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, JavaScript, TypeScript, MongoDB, PostgreSQL, Redis, AWS, Azure, Identity and Access Management, AI tools, Security engineering principles, Application security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a developer-friendly identity platform that simplifies authentication and authorization for applications.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7314387</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), MySQL, Federal</Title>
      <Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will own all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team on scaling, securing, and tuning our infrastructure, be it self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24x7 availability of the production database tier; design, implement, and document operational processes, tasks, and configuration management; and coordinate efforts on performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schemas, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p>Required Skills:</p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficiency in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire. Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>MySQL, SQL, performance tuning, query optimization, database design, high availability, scripting, database monitoring tools (Grafana), database security, compliance, AWS database services, PostgreSQL, Docker, Kubernetes, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3853e872-906</externalid>
      <Title>Senior Software Engineer, Tenant Protection (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>We are looking for a Senior Software Engineer to join our Tenant Protection team. As a member of this team, you will be responsible for designing and building features using technologies such as Node.js (JavaScript/Typescript), AWS, Azure, MongoDB, PostgreSQL, DynamoDB and Kubernetes.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and build features using technologies such as Node.js (JavaScript/Typescript), AWS, Azure, MongoDB, PostgreSQL, DynamoDB and Kubernetes</li>
<li>Lead the technical breakdown of complex requirements into clear, modular, and actionable engineering tasks, setting the standard for project clarity and velocity.</li>
<li>Drive and own the engineering estimation process for medium to large-sized initiatives, effectively managing risk and communicating technical trade-offs, timelines, and dependencies to engineering and product leadership.</li>
<li>Act as a key technical collaborator and influencer with internal stakeholders (e.g., Product Management, Security, Infrastructure), proactively aligning technical roadmaps and advocating for architectural changes that support long-term product vision.</li>
<li>Collaborate with industry-leading experts to implement the cutting-edge of Identity Protocols and Open Standards such as OpenID Connect, OAuth and SAML</li>
<li>Maintain and operate services at a large scale</li>
<li>Participate in scheduled on-call rotations</li>
<li>Mentor junior and mid-level engineers, providing guidance on system design, code quality, testing practices, and career development.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Practical experience using Node.js (JavaScript or Typescript) or a similar language</li>
<li>Experience working on systems that are highly reliable, maintainable and scalable.</li>
<li>Thorough understanding of application security and cloud security best practices</li>
<li>A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive</li>
<li>A track record of influencing engineering strategy and driving complex, multi-quarter projects to completion across organisational boundaries.</li>
<li>Demonstrated ability to coach and grow other engineers in areas of system architecture, security, and operational rigour.</li>
<li>Experience with cloud environments (AWS and Azure preferred)</li>
<li>The ability to communicate your ideas and collaborate with other team members effectively in a remote working environment.</li>
<li>Experience designing, analysing, and troubleshooting large-scale distributed systems</li>
<li>Enthusiasm to work with and learn more about Identity Protocols such as OAuth, OIDC and SAML</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Existing knowledge of Identity Protocols such as OAuth, OIDC and SAML</li>
<li>Existing knowledge of security engineering and application security</li>
<li>Proven experience and understanding of architecture principles across infrastructure platforms, security, data, integration, and application layers</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Node.js, JavaScript, Typescript, AWS, Azure, MongoDB, PostgreSQL, DynamoDB, Kubernetes, OpenID Connect, OAuth, SAML, Identity Protocols, Security Engineering, Application Security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a global company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7788244</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ece4c581-f94</externalid>
      <Title>Senior Database Reliability Engineer (DBRE), PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid-senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a provider of identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7774364</Applyto>
      <Location>New York, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c05140a-b31</externalid>
      <Title>Senior Software Engineer, Actions (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our high-calibre Extensibility Engineering team to help us continue to improve our ultra-low latency, secure, and scalable platform for untrusted code execution.</p>
<p>In this role, you will have the opportunity to contribute significantly to the foundation of Auth0&#39;s Ecosystem, delivering a huge impact for our customers and partners.</p>
<p>As a member of Developer Experience - Extensibility Platform, you will:</p>
<ul>
<li>Design, architect, and document large-scale distributed systems.</li>
<li>Implement features across different layers of the stack using technologies such as Go, MongoDB, PostgreSQL, AWS, Azure, and Kubernetes.</li>
<li>Lead team discussions and mentor other engineers toward senior roles, improving the team’s productivity.</li>
<li>Contribute to improving Auth0&#39;s architecture, performance, observability, security controls, and best practices.</li>
<li>Collaborate with Product and Security teams to define and continually improve Auth0’s Extensibility platform and architecture.</li>
<li>Participate in our on-call rotations for troubleshooting production issues.</li>
</ul>
<p>Key Qualifications:</p>
<ul>
<li>5+ years of experience in software development, building distributed systems using Go.</li>
<li>Strong experience in API-driven applications using REST and/or gRPC.</li>
<li>Experience with packaging and distributing containerized applications using Docker and Kubernetes.</li>
<li>Experience with sandboxing untrusted code or tenant isolation (both preferred but not required).</li>
<li>A high bar for both code quality as well as quality of user experience.</li>
<li>Proven ability to collaborate with others to drive initiatives forward.</li>
</ul>
<p>Nice To Haves:</p>
<ul>
<li>Solid hands-on experience with Node.js in building scalable backend services</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Go, MongoDB, PostgreSQL, AWS, Azure, Kubernetes, API-driven applications, REST, gRPC, Docker, containerized applications, Node.js, scalable backend services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that specialises in authentication and authorization platforms.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743622</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9aa81908-c43</externalid>
      <Title>Senior Database Reliability Engineer (DBRE), PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions for businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437974</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb421081-0b2</externalid>
      <Title>Senior Software Engineer - Lifecycle Management</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Senior Software Engineer to work on our Onboarding and Lifecycle Management (LCM) Platform team, with a focus on enhancing and managing services for importing, syncing, and provisioning identities and access policies (users, groups, roles, entitlements, etc.). These features give customers the flexibility to link and enhance their business processes with Okta’s identity management product.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Work with the senior engineering team on major development projects, from design through implementation</li>
<li>Be a key contributor in the implementation of the LCM infrastructure</li>
<li>Troubleshoot customer issues and debug from logs (Splunk, syslogs, etc.)</li>
<li>Design and implement features with functional and unit tests, along with monitoring and alerts</li>
<li>Conduct design and code reviews, analysis, and performance tuning</li>
<li>Prototype quickly to validate scale and performance</li>
<li>Provide technical leadership and mentorship to more junior engineers</li>
<li>Interface with Architects, QA, Product Owners, Engineering Services, Tech Ops</li>
<li>Partner with our Product Development, QA, and Site Reliability Engineering teams for scoping the development and deployment work</li>
</ul>
<p>Required knowledge, skills, and abilities:</p>
<ul>
<li>Experienced in building software systems that manage and deploy reliable, performant infrastructure and product code at scale on cloud infrastructure</li>
<li>4+ years of software development in Java, preferably with significant experience in Hibernate and Spring Boot</li>
<li>2+ years of development experience building services, internal tools, and frameworks</li>
<li>2+ years of experience automating and deploying large-scale production services in AWS, GCP, or similar</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures</li>
<li>Experience working with relational databases, ideally MySQL, PostgreSQL or GraphDB</li>
<li>Ability to work effectively with distributed teams and people of various backgrounds</li>
<li>Lead and mentor junior engineers</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with server-side technologies including caching, asynchronous processing, and multi-threading.</li>
<li>Experience in TDD.</li>
<li>Experience with UI development or JavaScript frameworks</li>
<li>Knowledge of Identity and Access Management protocols and technologies: OAuth, OpenID Connect, SAML, SCIM</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Hibernate, Spring Boot, AWS, GCP, Caching, Stream Processing, Resilient Architectures, Relational Databases, MySQL, PostgreSQL, GraphDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is the leading independent provider of enterprise identity, enabling organisations to securely connect the right people to the right technologies at the right time.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6879868</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b271dfc9-021</externalid>
      <Title>Staff Software Engineer- Fullstack (Workflows)</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Staff Full-Stack Engineer to join the Integration Builder team within Okta Workflows. This team owns the core no-code surface that enables both internal teams and third-party developers (ISVs) to build powerful integrations and automation experiences with ease.</p>
<p>As a Staff Engineer, you&#39;ll lead initiatives that span front-end and back-end services, delivering performant, secure, and scalable features. You&#39;ll help define architecture, drive implementation, and collaborate closely with Design, PM, and Platform teams. You&#39;ll also work directly with our technical architects to help shape what we build, and how we build it.</p>
<p>This is a high-impact role in a growing, strategic product area with strong executive visibility.</p>
<p>Role Details:</p>
<ul>
<li>Design, build, and maintain end-to-end features using modern JavaScript and cloud-native technologies (React, Node.js, TypeScript, PostgreSQL).</li>
<li>Lead technical design for key initiatives, driving quality, scalability, and maintainability.</li>
<li>Build reusable and performant UI components for a best-in-class no-code builder experience.</li>
<li>Own services throughout their lifecycle, including implementation, testing, deployment, observability, and incident response.</li>
<li>Work closely with Product, Design, and Architecture to define the “what” and “how” of features, ensuring solutions are both user-friendly and technically sound.</li>
<li>Partner with infrastructure and platform teams to optimize system performance and reliability</li>
<li>Mentor and support engineers across the team, fostering a culture of quality, ownership, and continuous improvement.</li>
<li>Contribute to cross-functional planning, architectural reviews, and team-wide engineering practices.</li>
</ul>
<p>Experience:</p>
<ul>
<li>6+ years of experience building modern web applications in a full-stack environment.</li>
<li>Deep expertise in TypeScript, ReactJS, and Node.js (Express or similar frameworks).</li>
<li>Experience designing APIs, working with relational databases (PostgreSQL or similar), and building services in a distributed, cloud-based architecture.</li>
<li>A strong product mindset: you work well with Product and Design and care about delivering intuitive and elegant user experiences.</li>
<li>Ability to collaborate closely with Architects to make smart technical tradeoffs, and drive alignment across teams.</li>
<li>Passion for craftsmanship and high engineering standards (testing, monitoring, documentation, scalability).</li>
<li>Excellent communication skills, with the ability to lead technical discussions and build consensus across functions.</li>
<li>A growth mindset and interest in mentoring others and upleveling the team.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Experience with PostgreSQL, Docker, and Kubernetes.</li>
<li>Exposure to low-code/no-code tools, workflow engines, or visual development platforms.</li>
<li>Interest in AI-assisted developer tooling or automation.</li>
</ul>
<p>Education and Training:</p>
<ul>
<li>Bachelor&#39;s in Computer Science, or relevant industry experience</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$154,000-$230,000 CAD</Salaryrange>
      <Skills>TypeScript, ReactJS, Node.js, PostgreSQL, JavaScript, Cloud-native technologies, APIs, Relational databases, Distributed cloud-based architecture, Docker, Kubernetes, Low-code/no-code tools, Workflow engines, Visual development platforms, AI-assisted developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>154000</Compensationmin>
      <Compensationmax>230000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7087237</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>adb421df-976</externalid>
      <Title>Staff Software Engineer, End User Protection (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our End User Protection team. As a Staff Software Engineer, you will be part of a fast-paced, agile team.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Design and build features using technologies such as Node.js (JavaScript/Typescript), AWS, Azure, MongoDB, PostgreSQL, DynamoDB and Kubernetes</li>
<li>Lead the technical breakdown of highly complex, ambiguous requirements into clear, modular, and actionable engineering tasks, setting the standard for project clarity and velocity</li>
<li>Drive and own the engineering estimation process for large-scale initiatives, effectively managing risk and communicating technical trade-offs, timelines, and dependencies to engineering and product leadership</li>
<li>Drive cross-functional technical projects with other Auth0 and Okta engineering teams, ensuring alignment on service dependencies, security standards, and operational best practices</li>
<li>Act as a key technical collaborator and influencer with internal stakeholders (e.g., Product Management, Security, Infrastructure), proactively aligning technical roadmaps and advocating for architectural changes that support long-term product vision</li>
<li>Collaborate with industry-leading experts to implement the cutting-edge of Identity Protocols and Open Standards such as OpenID Connect, OAuth and SAML</li>
<li>Maintain and operate services at a high scale</li>
<li>Participate in scheduled on-call rotations</li>
<li>Mentor senior and mid-level engineers, providing guidance on system design, code quality, testing practices, and career development. Foster a culture of technical excellence and collaborative ownership.</li>
</ul>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>Practical experience using Node.js (JavaScript or Typescript) or a similar language</li>
<li>Experience working on systems that are highly reliable, maintainable and scalable</li>
<li>Thorough understanding of application security and cloud security best practices</li>
<li>A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive</li>
<li>A track record of influencing engineering strategy and driving complex, multi-quarter projects to completion across organisational boundaries</li>
<li>Demonstrated ability to coach and grow other engineers in areas of system architecture, security, and operational rigour</li>
<li>Experience with cloud environments (AWS and Azure preferred)</li>
<li>The ability to communicate your ideas and collaborate with other team members effectively in a remote working environment</li>
<li>Experience designing, analysing, and troubleshooting large-scale distributed systems</li>
<li>Enthusiasm to work with and learn more about Identity Protocols such as OAuth, OIDC and SAML</li>
</ul>
<p>In addition to your technical skills, you will also need to have excellent communication and interpersonal skills, as well as the ability to work effectively in a remote team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Node.js, JavaScript, Typescript, AWS, Azure, MongoDB, PostgreSQL, DynamoDB, Kubernetes, application security, cloud security, Identity Protocols, Open Standards, OAuth, OIDC, SAML, system design, code quality, testing practices, career development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>160000</Compensationmin>
      <Compensationmax>220000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7821930</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>59d0a1d0-1f4</externalid>
      <Title>Intermediate Backend Engineer, SRM: Security Platform Management</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on our next-generation Security Platform Management team, you will help build GitLab&#39;s enterprise security solutions from the ground up.</p>
<p>You&#39;ll design and develop greenfield backend services that close competitive gaps and position GitLab as a single platform for enterprise security, directly shaping how thousands of organisations understand and manage their security posture at scale.</p>
<p>Working closely with a distributed team of 8 engineers, a Product Manager, and a UX designer across the US, Israel, and India, you&#39;ll own critical backend systems and APIs that power capabilities like a new Security Manager role system, asset inventory with posture sharing, logical asset gathering with statistics, and unified configuration for GitLab&#39;s security tools.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Building a new Security Manager role system to centralise and streamline security administration</li>
<li>Creating assets inventory and posture sharing capabilities, including logical asset gathering and unified configuration for all GitLab security tools</li>
</ul>
<p>Key responsibilities include:</p>
<ul>
<li>Design and develop next-generation Security Platform Management capabilities that strengthen GitLab&#39;s enterprise security offering</li>
<li>Build and optimise scalable backend services and data models in Ruby on Rails with PostgreSQL for large volumes of security data</li>
<li>Develop and maintain robust REST and GraphQL APIs that power security workflows across the GitLab platform</li>
<li>Collaborate with Infrastructure, Policies, and Security Insights teams to deliver cross-functional security features end to end</li>
<li>Implement and refine unified configuration mechanisms for GitLab&#39;s suite of security tools to simplify management at scale</li>
<li>Work within focused, feature-specific squads to deliver high-impact, well-tested functionality with minimal context switching</li>
<li>Contribute to technical design decisions, code reviews, and standards that shape the architecture of GitLab&#39;s security platform</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Proficiency in backend development with Ruby on Rails, including building and maintaining production services</li>
<li>Experience designing and optimising PostgreSQL schemas and queries for large-scale data</li>
<li>Experience building and maintaining REST and GraphQL APIs that integrate with complex products or platforms</li>
<li>Familiarity with Git-based workflows and continuous integration and delivery (CI/CD), ideally using GitLab</li>
<li>Knowledge of Elasticsearch and NoSQL database technologies, or interest in learning and applying them</li>
<li>Experience working in collaborative, cross-functional teams with product management and design</li>
<li>Ability to work autonomously in an all-remote, asynchronous environment while staying aligned with team goals</li>
<li>Interest in security domains and in building scalable, reliable solutions for enterprise customers, with openness to transferable experience from related areas</li>
</ul>
<p>The Security Platform Management team sits within GitLab&#39;s Security Risk Management area and is responsible for building new platform capabilities that help enterprises understand, manage, and improve their security posture inside GitLab.</p>
<p>Our team consists of 8 backend engineers, 1 Product Manager, and 1 UX designer distributed across the US, Israel, and India, and we collaborate asynchronously and organise into feature-specific squads to deliver focused outcomes.</p>
<p>We own initiatives such as the new Security Manager role system, security assets inventory with posture sharing, logical asset gathering and statistics, and unified configuration for GitLab&#39;s security tools.</p>
<p>Our main opportunity is greenfield: we are designing and shipping net-new enterprise security solutions rather than maintaining legacy systems, and defining how large organisations use GitLab to secure applications at global scale.</p>
<p>The base salary range for this role&#39;s listed level is currently for residents of the United States only.</p>
<p>This range is intended to reflect the role&#39;s base salary rate in locations throughout the US.</p>
<p>Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, alignment with market data, and geographic location.</p>
<p>The base salary range does not include any bonuses, equity, or benefits.</p>
<p>See more information on our benefits and equity.</p>
<p>Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.</p>
<p>United States Salary Range $98,000-$210,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$98,000-$210,000 USD</Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, REST and GraphQL APIs, Git-based workflows, Continuous integration and delivery (CI/CD), Elasticsearch and NoSQL database technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>98000</Compensationmin>
      <Compensationmax>210000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8443325002</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b267407d-022</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Staff Full-Stack Engineer to join the Flow builder team within Okta Workflows. This team owns the core no-code canvas that enables both internal teams and our customers to build powerful automation experiences with ease.</p>
<p>As a Staff Engineer, you&#39;ll lead initiatives that span front-end and back-end services, delivering performant, secure, and scalable features. You&#39;ll help define architecture, drive implementation, and collaborate closely with Design, PM, and Platform teams. You&#39;ll also work directly with our technical architects to help shape what we build, and how we build it.</p>
<p>This is a high-impact role in a growing, strategic product area with strong executive visibility.</p>
<p>Role Details:</p>
<ul>
<li>Design, build, and maintain end-to-end features using modern JavaScript and cloud-native technologies (React, Node.js, TypeScript, PostgreSQL).</li>
<li>Lead technical design for key initiatives, driving quality, scalability, and maintainability.</li>
<li>Build reusable and performant UI components for a best-in-class no-code builder experience.</li>
<li>Own services throughout their lifecycle, including implementation, testing, deployment, observability, and incident response.</li>
<li>Work closely with Product, Design, and Architecture to define the “what” and “how” of features, ensuring solutions are both user-friendly and technically sound.</li>
<li>Partner with infrastructure and platform teams to optimize system performance and reliability.</li>
<li>Mentor and support engineers across the team, fostering a culture of quality, ownership, and continuous improvement.</li>
<li>Contribute to cross-functional planning, architectural reviews, and team-wide engineering practices.</li>
</ul>
<p>Experience:</p>
<ul>
<li>8+ years of experience building modern web applications in a full-stack environment.</li>
<li>Deep expertise in TypeScript, ReactJS, and Node.js (Express or similar frameworks).</li>
<li>Experience designing APIs and building robust services at scale in a distributed, cloud-based architecture.</li>
<li>Experience with PostgreSQL, Docker, and Kubernetes.</li>
<li>Experience delivering elegant, enterprise-grade user experiences by partnering with Product and Design teams in a fast-paced, agile environment.</li>
<li>Ability to collaborate closely with Architects to make smart technical tradeoffs, and drive alignment across teams.</li>
<li>Passion for craftsmanship and high engineering standards (testing, monitoring, documentation, scalability).</li>
<li>Excellent communication skills, with the ability to lead technical discussions and build consensus across functions.</li>
<li>A growth mindset and interest in mentoring others and upleveling the team.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to low-code/no-code tools, workflow engines, or visual development platforms.</li>
<li>Interest in AI-assisted developer tooling or automation.</li>
</ul>
<p>Education and Training:</p>
<ul>
<li>Bachelor’s in Computer Science, or relevant industry experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, ReactJS, Node.js, PostgreSQL, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7155588</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc887f35-1b1</externalid>
      <Title>Senior Backend Engineer (Ruby on Rails), Plan: Knowledge</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer (Ruby on Rails) on the Plan: Knowledge group, you&#39;ll help shape how teams plan, document, and share knowledge in GitLab. You&#39;ll build and improve the backend systems behind Wiki, Pages, Markdown, and text editors, while also helping design AI-powered capabilities such as the planner agent and Model Context Protocol (MCP) integrations that connect GitLab&#39;s GraphQL APIs with external tools.</p>
<p>In this role, you&#39;ll work closely with frontend engineers, Product, UX, and Security to create reliable, scalable systems that support both technical and non-technical users across GitLab&#39;s planning experience. As part of GitLab&#39;s AI-first culture, you&#39;ll also use the Duo Agent Platform in your daily workflow to improve productivity and support faster iteration.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Building AI agents such as the planner agent to support planning and knowledge management workflows</li>
<li>Architecting MCP integrations that expose GitLab GraphQL APIs to external AI tools and platforms</li>
</ul>
<p>Key responsibilities include:</p>
<ul>
<li>Leading backend architecture for Wiki, Pages, Markdown, and text editor capabilities used across GitLab</li>
<li>Designing and building AI agents that support planning and knowledge management workflows</li>
<li>Architecting MCP integrations that connect GitLab GraphQL APIs with external AI platforms and tools</li>
<li>Driving improvements in reliability and performance across application code, PostgreSQL queries, Redis usage, and background jobs</li>
<li>Developing and evolving GraphQL APIs that are clear for frontend engineers and support scalable product experiences</li>
<li>Collaborating with frontend engineers, Product, UX, and Security to break down complex work into shippable iterations</li>
<li>Mentoring engineers through code review, technical discussions, and shared backend best practices</li>
<li>Supporting incident response and production debugging, then turning learnings into lasting system improvements</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Strong experience building and maintaining backend applications with Ruby on Rails, including core components such as ActiveRecord and Redis</li>
<li>Experience designing and supporting GraphQL APIs with attention to usability, maintainability, and performance</li>
<li>Knowledge of PostgreSQL query design, optimization, and scaling in high-traffic production systems</li>
<li>Experience building or integrating AI agents, intelligent workflows, or related platform capabilities</li>
<li>Familiarity with Model Context Protocol (MCP) or similar patterns for connecting APIs to external tools and platforms</li>
<li>Ability to investigate production issues, debug complex systems, and improve reliability over time</li>
<li>Experience leading technical decisions, mentoring engineers, and contributing to engineering standards across a team</li>
<li>Clear communication and cross-functional collaboration skills, with openness to candidates who bring transferable experience from adjacent backend or platform work</li>
</ul>
<p>The Plan: Knowledge group owns core knowledge management experiences in GitLab, including Wiki, Pages, Markdown, and Text Editors, and is expanding those foundations with AI-powered capabilities such as the planner agent and MCP-based integrations. The team includes 6 engineers and works with a Product Manager, Engineering Manager, Product Designer, and Technical Writer.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, GraphQL, PostgreSQL, Redis, AI agents, Model Context Protocol, API design, Usability, Maintainability, Performance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8455304002</Applyto>
      <Location>Remote, Americas; Remote, APAC; Remote, EMEA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3299844-c42</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Opportunity</strong></p>
<p>The Migration Services team builds the critical, data-driven services that seamlessly move customers across environments in real-time. We are looking for a Senior Software Engineer who is passionate about crafting elegant solutions to complex distributed systems problems. You will be a key player in driving innovation, collaborating with architects and product managers to build and own the crucial infrastructure that underpins the Auth0 ecosystem. If you are excited by the prospect of making a massive impact, we want to hear from you!</p>
<p><strong>What You&#39;ll Achieve</strong></p>
<ul>
<li>Build for scale. You will develop and operate highly scalable, data-intensive services, demonstrating code craftsmanship and an eye for detail.</li>
<li>Master the data stream. You&#39;ll leverage streaming technologies and implement advanced change data capture (CDC) strategies to ensure the secure, reliable, and efficient transfer of data.</li>
<li>Drive operational excellence. Through continuous monitoring and performance tuning, you will enhance the reliability of our migration processes and participate in our team&#39;s on-call rotation to ensure our services are always on.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Proven engineering background. With 3+ years of experience in fast-paced, agile environments, you have a proven track record of shipping high-quality software.</li>
<li>Database familiarity. You possess a strong understanding of database fundamentals and have hands-on experience with datastores like MongoDB and PostgreSQL.</li>
<li>Go is your go-to. You have strong proficiency in Golang or, optionally, in Node.js.</li>
<li>A passion for reliability. You have interest and experience in reliability engineering and are familiar with observability and incident management.</li>
<li>Collaborative skills. Your excellent written and verbal communication skills enable you to collaborate effectively with cross-functional and geo-dispersed teams.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure) and container technologies such as Kubernetes and Docker.</li>
</ul>
<p>#Hybrid</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, MongoDB, PostgreSQL, Distributed systems, Reliability engineering, Observability, Incident management, Kafka, IAM, Cloud providers, Container technologies, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7809897</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f52c9cf9-ea5</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>About the team</strong></p>
<p>As part of our vision to Free Everyone to Safely Connect to any Technology, Okta is investing to help B2B / Enterprise SaaS companies engage with their customers by leveraging the Okta Integration Network (OIN) platform. This involves building and enhancing the user experience for these SaaS companies (ISVs - Independent Software Vendors) when they onboard their applications into the OIN.</p>
<p>We are constantly thinking about fostering network effects on the existing OIN Platform by designing for reach, scale and extensibility. Join us in building the next generation of a streamlined Partner journey within Okta, as we increase its footprint to even broader capabilities.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Senior Software Engineer to join an innovative and fast-moving full-stack team to build a new set of developer-facing products to achieve this mission. We welcome personalities that are self-driven, think outside the box, take pride in shipping high-quality software and, most importantly, are kind.</p>
<p><strong>Job Duties and Responsibilities:</strong></p>
<ul>
<li>Analyze/Refine Requirements with Product Management and other stakeholders by asking the right questions and driving clarity.</li>
<li>Work with user experience designers and architects to scope and plan engineering efforts and dependencies.</li>
<li>Develop secure and reusable components to enable other teams to easily implement UIs with rich and consistent look and feel.</li>
<li>Develop APIs and SDKs that developers love. The target audience for this team’s roadmap is the developers working at the B2B Enterprise SaaS companies.</li>
<li>Hold a high bar for security, test-driven development, design reviews, and code reviews while maintaining a sense of urgency.</li>
<li>Define long-term observability and reliability metrics for the systems/features that they own.</li>
</ul>
<p><strong>Required Skills, Attitude and Knowledge:</strong></p>
<ul>
<li>Have 5-7 years of software development experience.</li>
<li>Proficient in at least one of the backend languages and frameworks - Java, C#, Typescript (NodeJS).</li>
<li>Comfortable in React or a similar front-end UI stack (Angular, Vue).</li>
<li>Demonstrable knowledge of HTTP fundamentals with strong API Design skills.</li>
<li>Have experience working with at least one of the database technologies - MySQL, Redis, or PostgreSQL.</li>
<li>Experience with distributed systems patterns including caching, asynchronous processing etc.</li>
<li>Track record of delivering work incrementally to get feedback and iterating over solutions.</li>
<li>Have an infectious enthusiasm and bring the right attitude to the team regarding ownership, accountability, attention to detail, and customer focus.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Have built fault-tolerant &amp; scalable integrations to third-party services.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C#, Typescript, React, API Design, MySQL, Redis, PostgreSQL, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7593685</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6984004d-b3f</externalid>
      <Title>Intermediate Backend Engineer, Gitlab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab with assurance by building and supporting the deployment tooling, infrastructure, and automation behind how GitLab is installed, upgraded, and operated.</p>
<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to improve reliability, security, and scalability in production-grade environments. This is a hands-on role where you&#39;ll partner with Distribution Engineers, Site Reliability Engineers, Release Managers, Security, and Development teams to make self-managed GitLab easier to use across a wide range of platforms.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolve Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support new GitLab features and architectures</li>
<li>Improve installation, upgrade, and validation automation for large-scale self-managed GitLab deployments</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Maintain and improve the Omnibus GitLab package so GitLab components work reliably in self-managed deployments.</li>
<li>Develop and support GitLab Helm Charts for scalable, production-ready Kubernetes deployments.</li>
<li>Enhance the GitLab Environment Toolkit (GET) and validated reference architectures used by enterprise and internal users.</li>
<li>Support and extend the GitLab Operator for Kubernetes-native lifecycle management of GitLab installations.</li>
<li>Improve the installation, upgrade, and day-to-day operating experience across supported self-managed platforms.</li>
<li>Collaborate with Security to address vulnerabilities and strengthen secure defaults and configurations across the deployment stack.</li>
<li>Build and maintain automation and continuous integration and continuous deployment pipelines that validate deployment tooling across Omnibus, Charts, GET, and the Operator.</li>
<li>Partner with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features and keep user-facing documentation accurate and useful.</li>
</ul>
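<p>As a flavor of what upgrade tooling must reason about: multi-version upgrades often have to pass through required stop versions on the way to the target release. A toy Python sketch, using made-up stop versions rather than GitLab&#39;s actual upgrade-stop list:</p>

```python
def upgrade_path(current, target, required_stops):
    """Versions (as (major, minor) tuples) to install when moving
    current -> target, passing through every required stop in between.
    The stop list is supplied by the caller; the one below is invented."""
    stops = [v for v in sorted(required_stops) if current < v < target]
    return stops + [target]


# Hypothetical stop versions:
stops = [(15, 0), (15, 4), (16, 0)]
path = upgrade_path((14, 9), (16, 1), stops)
```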
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building and maintaining backend services in production environments, especially in deployment, infrastructure, or platform tooling.</li>
<li>Practical knowledge of Kubernetes operations, including authoring and maintaining Helm charts.</li>
<li>Proficiency with Ruby and Go, along with scripting skills to automate workflows and tooling.</li>
<li>Familiarity with Terraform and infrastructure-as-code practices across cloud and on-premises environments.</li>
<li>Hands-on experience with relational databases, especially PostgreSQL, including performance and reliability considerations.</li>
<li>Understanding of secure, scalable, and supportable deployment practices, along with observability tools such as Prometheus and Grafana.</li>
<li>Experience collaborating in large codebases and distributed teams, including writing clear user-facing documentation and implementation guides.</li>
<li>Openness to learning new technologies and applying transferable skills across different parts of the GitLab deployment stack.</li>
</ul>
<p>The Upgrades team is part of GitLab Delivery and delivers GitLab to self-managed users through supported, validated deployment tooling. The team maintains Omnibus GitLab, Helm Charts, the GitLab Operator, and the GitLab Environment Toolkit (GET) to help self-managed users deploy GitLab securely and reliably across diverse environments. You&#39;ll join a distributed group of backend engineers that works asynchronously across time zones and collaborates closely with Site Reliability Engineering, Release, Security, and Development teams. The team is focused on improving installation and upgrade workflows, strengthening automation and security, and helping self-managed customers run GitLab successfully at any scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Go, Kubernetes, Helm charts, Terraform, infrastructure as code, PostgreSQL, relational databases, observability tools, Prometheus, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by over 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463951002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aae5c27d-20b</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
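<p>Capacity planning at its simplest is a growth projection. A deliberately naive Python sketch (real forecasting would model workload trends rather than assume a constant growth rate):</p>

```python
import math


def months_until_full(current_gb, capacity_gb, monthly_growth_gb):
    """Naive linear forecast of months until storage reaches capacity."""
    if monthly_growth_gb <= 0:
        return None  # flat or shrinking usage never fills the disk
    return math.ceil((capacity_gb - current_gb) / monthly_growth_gb)


months = months_until_full(current_gb=620, capacity_gb=1000, monthly_growth_gb=45)
```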
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks such as provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
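<p>One concrete example of such monitoring: replication lag can be computed from write-ahead log positions (LSNs). A small Python sketch of the arithmetic, mirroring what PostgreSQL&#39;s pg_wal_lsn_diff() computes server-side; the LSN values below are illustrative:</p>

```python
def lsn_to_bytes(lsn):
    """Convert a PostgreSQL LSN such as '16/B374D848' to a byte offset:
    the hex field before the slash is the high 32 bits, the field after
    is the low 32 bits."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)


def replication_lag_bytes(primary_lsn, replica_lsn):
    """Byte lag between the primary's and a replica's WAL positions."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)


lag = replication_lag_bytes("16/B374D848", "16/B3700000")
```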
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents such as performance regressions, replication lag, deadlocks, bloat issues, and storage failures.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 | Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>152000</Compensationmin>
      <Compensationmax>228000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7436028</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>549fc0bc-10b</externalid>
      <Title>Software Architect, Lifecycle Management</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Okta is an enterprise-grade identity management platform, built from the ground up in the cloud and delivered with an unwavering focus on customer success. With Okta, organisations can manage access across any application, person or device. Whether the people are employees, partners or customers or the applications are in the cloud, on premises or on a mobile device, Okta helps organisations become more secure, make people more productive, and maintain compliance.</p>
<p>The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.</p>
<p>We are looking for an experienced Principal Software Engineer to work on our Onboarding and Lifecycle Management (LCM) Platform team, with a focus on enhancing and managing services for importing, syncing, and provisioning identities and access policies (users, groups, roles, entitlements, etc.). These features give customers the flexibility to link and enhance their business processes with Okta’s identity management product.</p>
<p>The ideal candidate is a hands-on expert Java developer who is deeply technical, with a passion for building high-quality, secure, and performant applications and frameworks; has demonstrable experience leading technical projects involving more than 20 engineers across multiple workstreams; is excited by the opportunity to work on cutting-edge security and identity management challenges; and is a thought leader who can drive technical strategy and mentor other engineers.</p>
<p>A collaborative individual with excellent communication skills, capable of working with cross-functional teams to deliver on a shared vision. You will not just be a builder, but a force multiplier who creates frameworks and solutions that enable other teams to be more productive.</p>
<p>In this role, you will design, build, and maintain our platform for scale. The ideal candidate has experience building software systems to manage and deploy reliable, performant infrastructure and product code at scale on cloud infrastructure.</p>
<p>Job Duties And Responsibilities</p>
<ul>
<li>Work with senior engineering team in major development projects, design and implementation</li>
<li>Lead the architectural design and implementation of new features and services, with a focus on scalability, performance, and security.</li>
<li>Collaborate with product managers, architects, and other engineering teams to define the technical strategy and lead the prototyping of software components.</li>
<li>Directly oversee and coordinate complex technical initiatives involving 20+ engineers, ensuring alignment across disparate sub-teams</li>
<li>Drive a culture of engineering excellence and continuous improvement, with a focus on robust testing, monitoring, and operational excellence.</li>
<li>Stay up-to-date with the latest industry trends and technologies in identity, security, and distributed systems.</li>
<li>Partner with our Product Development, QA, and Site Reliability Engineering teams for scoping the development and deployment work.</li>
</ul>
<p>Required Knowledge, Skills, And Abilities</p>
<ul>
<li>15+ years of Software Development in Java, preferably significant experience with Hibernate and Spring Boot</li>
<li>A deep understanding of design patterns, scalability patterns, security engineering, and object-oriented principles.</li>
<li>4+ years experience automating and deploying large scale production services in AWS, GCP or similar</li>
<li>Deep understanding of infrastructure level technologies: caching, stream processing, resilient architectures.</li>
<li>Experience working with relational databases, ideally MySQL, PostgreSQL or GraphDB</li>
<li>Strong communication skills and the ability to work across functions, distributed teams.</li>
<li>Lead and mentor junior engineers</li>
</ul>
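<p>As a small illustration of the caching theme above: a minimal time-to-live cache in Python. This is a sketch only; production caches add locking, size bounds, and eviction policies:</p>

```python
import time


class TTLCache:
    """Minimal time-based cache: entries expire after a fixed TTL."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazily drop stale entries
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)


# Example with a fake clock so expiry is deterministic:
now = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
cache.put("user:42", {"name": "Ada"})
hit = cache.get("user:42")
now[0] = 31.0
miss = cache.get("user:42")
```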
<p>Nice to haves:</p>
<ul>
<li>Experience with server-side technologies including caching, asynchronous processing, and multi-threading.</li>
<li>Experience with security best practices and threat modeling</li>
<li>Knowledge of Identity and Access Management protocols and technologies: OAuth, OpenID Connect, SAML, SCIM</li>
</ul>
<p>Education</p>
<ul>
<li>B.E. Computer Science or equivalent</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Hibernate, Spring Boot, design patterns, scalability patterns, security engineering, object-oriented principles, AWS, GCP, caching, stream processing, resilient architectures, relational databases, MySQL, PostgreSQL, GraphDB, communication skills, leadership skills, mentoring skills, server-side technologies, asynchronous processing, multi-threading, security best practices, threat modeling, Identity and Access Management protocols, OAuth, OpenID Connect, SAML, SCIM</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is an enterprise-grade identity management platform that provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7771673</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9e76f9cf-4c8</externalid>
      <Title>Senior Software Engineer - Billing</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Department</p>
<p>Cloudflare’s Billing Engineering Team is at the heart of every product launch, campaign, and initiative that Cloudflare undertakes. We build and maintain critical systems for billing, payments, usage metering, aggregation, invoicing and revenue recognition, powering billions in revenue and serving millions of customers.</p>
<p>Currently we&#39;re rebuilding our entire billing platform, designing a metering and aggregation layer that scales effortlessly while ensuring financial accuracy. This is high-impact, high-stakes work that touches all Cloudflare’s cutting-edge products like AI, Zero Trust, Edge Compute, Bot Management, DDoS Protection, etc.</p>
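<p>To make &quot;metering and aggregation&quot; concrete: at its core, a metering layer folds raw usage events into per-customer, per-product, per-period totals. A toy Python sketch with an invented event schema (not Cloudflare&#39;s actual one):</p>

```python
from collections import defaultdict


def aggregate_usage(events, period):
    """Sum raw usage events into (customer, product, period-bucket) totals.

    events: dicts with customer, product, timestamp (seconds), quantity.
    period: bucket width in seconds (e.g. 3600 for hourly buckets).
    """
    totals = defaultdict(int)
    for e in events:
        bucket = e["timestamp"] - (e["timestamp"] % period)
        totals[(e["customer"], e["product"], bucket)] += e["quantity"]
    return dict(totals)


events = [
    {"customer": "c1", "product": "requests", "timestamp": 100, "quantity": 5},
    {"customer": "c1", "product": "requests", "timestamp": 200, "quantity": 7},
    {"customer": "c1", "product": "requests", "timestamp": 3700, "quantity": 2},
]
totals = aggregate_usage(events, period=3600)
```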
<p>As a Senior Software Engineer, you’ll lead a team of talented, collaborative engineers working across Cloudflare’s ecosystem. You’ll navigate multiple high-profile projects, foster a culture of proactive communication and continuous learning, and drive technical excellence.</p>
<p>If you thrive on solving hard challenges at the intersection of financial infrastructure and distributed systems, this is your opportunity to make a massive impact while growing with us.</p>
<p>What You’ll Do</p>
<p>We are looking for an energetic, team-focused engineer with a growth mindset, able to drive their work from inception through requirements definition, technical specification, development, testing, and go-live. You will work on a range of transactional microservices written in Go. You will help maintain our operational excellence by triaging and solving inbound tickets related to issues across the services Billing maintains.</p>
<p>As you grow within the team you will be given opportunities to own bigger initiatives and lead projects from start to finish solo or as part of a smaller team.</p>
<p>Our Tech Stack</p>
<p>Modern container-based microservice architecture. Technologies we use include Docker, Go (golang), PostgreSQL, Redis, Kafka, Kubernetes, Temporal and the usual Unix/Linux tools and workflows.</p>
<p>We strive to build reliable, fault-tolerant systems that can operate at Cloudflare’s scale.</p>
<p>Desirable Skills and Knowledge</p>
<ul>
<li>BS+ in Computer Science or equivalent experience</li>
<li>7+ years professional experience as a developer/engineer</li>
<li>Knowledge of Golang or a desire to learn it</li>
<li>Solid understanding of RESTful APIs and service security</li>
<li>Working knowledge of SQL and relational databases such as PostgreSQL or MySQL</li>
<li>Experience with modern Unix/Linux development and runtime environments</li>
<li>Experience implementing secure, highly available distributed systems/microservices</li>
<li>Familiarity with event-driven architecture</li>
<li>Experience with API tooling and standards (Swagger/OpenAPI, OAuth/JWT)</li>
<li>Strong interpersonal and communication skills with a bias towards action</li>
</ul>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost, using technology already relied on by Cloudflare’s enterprise customers.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Docker, PostgreSQL, Redis, Kafka, Kubernetes, Temporal, Unix/Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7282689</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>982dd81e-416</externalid>
      <Title>Principal Database Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>As a Principal Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>
<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>
<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>
<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>
<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>
<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern database technologies and data stores that improve reliability, performance, and product capabilities.</li>
<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>
<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>
<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>
</ul>
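<p>To make the sharding theme concrete: the simplest routing scheme maps each key to a shard with a stable hash. A Python sketch (illustrative only, not GitLab&#39;s design; note that hash-mod remaps most keys when the shard count changes, which is why consistent hashing is often preferred):</p>

```python
import hashlib


def shard_for(key, num_shards):
    """Stable hash-mod routing: the same key always lands on the same shard.

    Uses SHA-256 rather than Python's built-in hash(), which is salted
    per-process and therefore not stable across runs.
    """
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards


shard = shard_for("project:1234", num_shards=8)
```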
<p>Requirements:</p>
<ul>
<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>
<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>
<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>
<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>
<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>
<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>
<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>
<li>Openness to collaborating with people from diverse technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$157,900-$338,400 USD</Salaryrange>
      <Skills>PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>157900</Compensationmin>
      <Compensationmax>338400</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8231379002</Applyto>
      <Location>Remote, EMEA; Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8fc80897-0ec</externalid>
      <Title>Intermediate Backend Engineer, SSCS: Supply Chain</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the SSCS Add-On team at GitLab, you&#39;ll help build a dedicated software supply chain security feature for regulated enterprise organisations.</p>
<p>In this role, you&#39;ll contribute to capabilities that help customers control software dependencies, verify artifact integrity, and identify malicious packages before they reach production.</p>
<p>Your work will sit at the intersection of backend engineering, product integration, and security-focused development.</p>
<p>You&#39;ll build in Ruby on Rails, work alongside Go services as needed, and help connect Add-On functionality with GitLab&#39;s existing security scanning experience so findings are surfaced consistently for users.</p>
<p>Because the team is small, you&#39;ll have meaningful influence on implementation details, team practices, and the product experience.</p>
<p>This role is part of GitLab&#39;s all-remote, async-first, values-driven environment, where clear written communication and thoughtful coordination across time zones are essential.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Implement well-scoped backend features across the Add-On&#39;s supply chain security product, including package policy integrations, ingestion pipeline improvements, signing and verification support, and reliability-focused work, delivering maintainable code on agreed timelines and meeting team-defined delivery commitments.</li>
</ul>
<ul>
<li>Build and maintain integrations between Add-On functionality and GitLab&#39;s existing software composition analysis scanning infrastructure so findings appear consistently and accurately in merge request security reports, reducing integration issues and supporting a reliable user experience.</li>
</ul>
<ul>
<li>Write and maintain comprehensive automated test coverage, including RSpec and integration tests, to improve test reliability, reduce regressions, and support safe, consistent releases as the codebase grows.</li>
</ul>
<ul>
<li>Take on work across multiple feature areas as priorities evolve, contributing as a generalist where the team needs support most.</li>
</ul>
<ul>
<li>Participate actively in code review by giving thoughtful, actionable feedback and incorporating feedback constructively into your own work to help maintain code quality and reduce rework.</li>
</ul>
<ul>
<li>Contribute clear internal documentation for the features and behavior you ship so teammates can support, extend, and troubleshoot the product effectively.</li>
</ul>
<ul>
<li>Coordinate with adjacent Software Supply Chain Security teams, including Dependency Firewall and Malware Database, as the Add-On brings together capabilities from across GitLab, helping deliver aligned functionality and smoother cross-team execution.</li>
</ul>
<ul>
<li>Collaborate effectively in an async-first environment across global time zones, including occasional off-hours overlap when needed, to keep work moving and decisions documented clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Backend development experience with the ability to deliver maintainable production code.</li>
</ul>
<ul>
<li>Solid proficiency in Ruby on Rails and strong PostgreSQL fundamentals.</li>
</ul>
<ul>
<li>Familiarity with Golang, or a willingness to learn and work across both Ruby on Rails and Go.</li>
</ul>
<ul>
<li>Strong testing discipline, including experience with RSpec or an equivalent testing framework.</li>
</ul>
<ul>
<li>Clear, direct written communication skills and experience collaborating with distributed teammates in asynchronous workflows.</li>
</ul>
<ul>
<li>Ability to manage scoped work independently, communicate progress clearly, and adjust as team priorities shift.</li>
</ul>
<ul>
<li>Interest in package ecosystems such as npm, Maven, PyPI, or OCI containers, or adjacent experience that helps you ramp in this domain.</li>
</ul>
<ul>
<li>Interest in software supply chain security, dependency management, DevSecOps, or security-adjacent product development, with the ability to apply security considerations in backend development work.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The SSCS Add-On team is part of GitLab&#39;s Software Supply Chain Security stage and is focused on building a commercial offering that addresses real supply chain security challenges for enterprise customers.</p>
<p>The team works on capabilities that combine multiple parts of the GitLab product into a more complete security solution for organisations with strong compliance and risk management needs.</p>
<p>The work is both technically interesting and strategically important.</p>
<p>The team is building in a space shaped by fast-moving threats, evolving customer requirements, and close coordination with nearby teams across the broader security area.</p>
<p>That combination creates an environment where engineers can contribute to product direction while solving practical backend challenges in a visible part of GitLab&#39;s platform.</p>
<p>For more on how related teams work, see Team Handbook Page.</p>
<p><strong>How GitLab Supports Full-Time Employees</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, Golang, PostgreSQL, RSpec, testing discipline, package ecosystems, software supply chain security, dependency management, DevSecOps, security-adjacent product development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8480565002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>99aa7ac0-2c6</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p><strong>Secure Every Identity, from AI to Human</strong></p>
<p>Identity is the key to unlocking the potential of AI. As the Senior Manager of Data Streaming Services, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
</ul>
<ul>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
</ul>
<ul>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
</ul>
<ul>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
</ul>
<ul>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>What you&#39;ll bring</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
</ul>
<ul>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
</ul>
<ul>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
</ul>
<ul>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
</ul>
<ul>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
</ul>
<ul>
<li>Experience with distributed streaming platforms like Kafka.</li>
</ul>
<ul>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
</ul>
<ul>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
</ul>
<ul>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
<p>To learn more about our Total Rewards program, please visit: https://rewards.okta.com/us</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between $194,000-$266,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$194,000-$266,000 CAD</Salaryrange>
      <Skills>engineering leadership, team management, technical architecture, distributed systems, project management, agile development, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, IAM, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides a platform for authentication and authorization services.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7735781</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2eb95095-49a</externalid>
      <Title>Intermediate Backend Engineer, SSCS: AI Governance</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the AI Governance team at GitLab, you&#39;ll help build a paid product designed for regulated enterprise organisations that need to audit, govern, and demonstrate compliance for AI agent usage inside GitLab.</p>
<p>This is product work with direct customer impact. You&#39;ll contribute to features that support visibility into how AI agents and related tools are used, and you&#39;ll help lay the foundation for governance controls that enterprise customers rely on.</p>
<p>You&#39;ll join a small team with clear product direction, technical guidance from experienced backend engineers, and meaningful ownership from the start.</p>
<p>This role is well suited for an engineer with experience in backend development who writes solid tests and wants to grow by shipping real features in an evolving product area.</p>
<p>You&#39;ll work in GitLab&#39;s all-remote, asynchronous environment, collaborating across teams as the AI Governance roadmap continues to expand.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement well-scoped backend features across the AI Governance product area, including event normalisation utilities, storage layer enhancements, API endpoint additions, export support, and registry integrations, delivering production-ready work that ships on schedule.</li>
</ul>
<ul>
<li>Build and maintain automated test coverage for your work using RSpec or equivalent tools to improve reliability and support safe, consistent releases.</li>
</ul>
<ul>
<li>Grow your knowledge of AI governance, agent-related product architecture, and integration patterns through hands-on delivery and teamwork so you can contribute more effectively as the roadmap evolves.</li>
</ul>
<ul>
<li>Work closely with senior and staff engineers to deliver solutions that are reliable, maintainable, and aligned with the product direction and release goals.</li>
</ul>
<ul>
<li>Work asynchronously with cross-functional partners and nearby engineering teams working on related governance and AI capabilities to help maintain smooth delivery across teams.</li>
</ul>
<ul>
<li>Take ownership of your scoped work and deliver with a high level of follow-through in a fast-moving product area, closing tasks with clear status updates and consistent execution.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Demonstrated backend development experience building and shipping production features.</li>
</ul>
<ul>
<li>Proficiency with Ruby on Rails and solid fundamentals in PostgreSQL.</li>
</ul>
<ul>
<li>Experience building and maintaining automated test coverage with RSpec or an equivalent testing framework.</li>
</ul>
<ul>
<li>Experience communicating clearly in writing with teammates in an async environment.</li>
</ul>
<ul>
<li>Demonstrated ability to drive scoped work through completion and follow through on commitments.</li>
</ul>
<ul>
<li>Experience with, or exposure to, audit event systems, telemetry pipelines, or compliance-focused tooling.</li>
</ul>
<ul>
<li>Experience learning new technical domains and applying that understanding to product development.</li>
</ul>
<ul>
<li>Additional experience with GraphQL APIs, event-driven architecture patterns, Python, or data-focused databases such as ClickHouse.</li>
</ul>
<p>About the team:</p>
<p>You&#39;ll join the AI Governance team within GitLab&#39;s Secure, Scale, and Compliance area. We focus on helping organisations gain visibility into and govern AI usage inside GitLab.</p>
<p>Our work spans two broad problem spaces: visibility, such as audit events, usage tracking, and observability, and policy controls, such as controls that help protect projects and meet compliance requirements.</p>
<p>We are building this team alongside a parallel AI Governance team, with both groups contributing to different parts of a fast-changing roadmap.</p>
<p>You&#39;ll work with a distributed group of engineers and collaborate with adjacent AI and Continuous Delivery teams as we integrate governance capabilities more deeply into the platform.</p>
<p>It&#39;s an interesting team for engineers who want to work on emerging product challenges at the intersection of AI, compliance, and large-scale enterprise software.</p>
<p>For more on how related teams work, see Team Handbook Page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, RSpec, GraphQL APIs, event-driven architecture patterns, Python, data-focused databases, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8480551002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0594b3f5-9a0</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>Join the Voice &amp; Video Postflight team as Twilio&#39;s next Senior Software Engineer.</p>
<p>This position is needed to build and evolve next-generation distributed systems that empower our customers through high-performance APIs. You will be tasked with solving the complex challenges inherent in supporting the massive scale of Twilio Voice, ensuring our infrastructure remains robust as we expand our capabilities.</p>
<p>As a Software Engineer, you will focus on the intersection of large-scale API development and advanced data systems. You will work on designing and implementing low-latency, highly scalable architectures that leverage modern database technologies to provide customers with seamless access to large-scale data.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and implement next-generation distributed systems capable of handling the immense throughput and concurrency requirements of Twilio Voice.</li>
</ul>
<ul>
<li>Design low-latency, high-scale APIs that empower customers with real-time access to their data and communications infrastructure.</li>
</ul>
<ul>
<li>Optimize and manage distributed database environments, ensuring high availability and performance across high-volume data stores.</li>
</ul>
<ul>
<li>Own the full development lifecycle, from initial system design and prototyping to the continuous operation of 24x7 production services.</li>
</ul>
<ul>
<li>Collaborate across engineering teams to solve &#39;hard&#39; distributed systems problems, ensuring our API layer is both resilient and developer-friendly.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>A Master&#39;s or Bachelor&#39;s degree and 5+ years of experience in software engineering, with a focus on backend or infrastructure systems.</li>
</ul>
<ul>
<li>Expertise in Distributed Systems: A deep understanding of consistency models, partition tolerance, and the challenges of scaling stateful services.</li>
</ul>
<ul>
<li>Core Languages: Proficiency in Java, Spring, and Dropwizard, and a strong grasp of building RESTful APIs at scale.</li>
</ul>
<ul>
<li>Database Fundamentals: Practical experience working with and tuning PostgreSQL, Aurora, or similar relational databases.</li>
</ul>
<ul>
<li>Cloud Infrastructure: Familiarity with deploying and managing large-scale services on AWS or GCP.</li>
</ul>
<ul>
<li>Operational Excellence: Comfortable operating in an agile environment with a &#39;you build it, you run it&#39; mentality.</li>
</ul>
<p>Desired:</p>
<ul>
<li>OLAP &amp; Big Data: Experience with ClickHouse or other column-oriented databases for high-performance analytical queries.</li>
</ul>
<ul>
<li>Infrastructure as Code: Familiarity with tools such as Terraform or Harness for managing systems.</li>
</ul>
<ul>
<li>Data Pipelines: Prior exposure to technologies like Kafka or Spark for moving and processing data between distributed systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed Systems, Java, Spring, Dropwizard, PostgreSQL, Aurora, AWS, GCP, Operational Excellence, OLAP &amp; Big Data, Infrastructure as Code, Data Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7785202</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>00b6fe58-4df</externalid>
      <Title>Senior Software Engineer, Enterprise Readiness</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks, powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Senior Software Engineer on the team, you will build the foundational services that enable the world’s largest organisations to run on Cloudflare. You will be responsible for the APIs, UIs, internal tooling, and admin platforms that help manage complex enterprise logic at scale.</p>
<p>More specifically, there will be a heavy focus on scaling and extending Organisations, the new abstraction for our largest customers and partners to manage Cloudflare. While this is a full-stack role, our roadmap for the coming year is weighted toward backend architecture and systems design.</p>
<p>You will spend your time helping design our data models, architecting high-performance services in Go, optimising our PostgreSQL layer, and ensuring our services are resilient within our Kubernetes ecosystem.</p>
<p>You won&#39;t just ship features; you will also own the &quot;operational excellence&quot; of your services. You’ll use tools like Jaeger, Sentry, and Kibana to troubleshoot complex distributed traces and ensure our platform remains highly available for our external and internal customers.</p>
<p>You will also rapidly expand your domain knowledge and ability to deliver change through AI tooling. Cloudflare is ramping up its support and infrastructure for AI development tools like OpenCode, which, connected to everything safely possible with MCPs, are enabling engineers to have greater impact, faster than ever.</p>
<p><strong>Core Technologies</strong></p>
<ul>
<li>Backend: Go, PostgreSQL, Redis, PHP</li>
</ul>
<ul>
<li>Infrastructure: Kubernetes, Docker, Kafka</li>
</ul>
<ul>
<li>Frontend: React, TypeScript</li>
</ul>
<ul>
<li>Observability: Kibana, Elasticsearch, Jaeger, Sentry</li>
</ul>
<p><strong>Examples of desirable skills, knowledge, and experience</strong></p>
<ul>
<li>Senior-Level Backend Expertise: 5+ years of experience building and scaling production-grade applications.</li>
</ul>
<ul>
<li>Systems Architecture: Proven experience designing distributed systems that are scalable, maintainable, and fault-tolerant.</li>
</ul>
<ul>
<li>Pragmatic Full Stack Ability: While your work will be weighted toward the backend, you are comfortable navigating a React/TypeScript codebase to build or improve UI components.</li>
</ul>
<ul>
<li>Agentic AI Development: You are excited about exploring and adopting the rapidly advancing AI tooling in your workflows.</li>
</ul>
<ul>
<li>Databases: Experience with SQL, including schema design, query optimisation, and serving globally distributed actors.</li>
</ul>
<ul>
<li>Observability-First Mindset: You don&#39;t consider a feature &quot;done&quot; until it&#39;s monitored. Experience using distributed tracing (Jaeger), error tracking (Sentry), and log analysis (Kibana/Elasticsearch) to debug production issues.</li>
</ul>
<ul>
<li>Cloud &amp; Containers: Practical experience deploying and managing services in Kubernetes and Docker.</li>
</ul>
<ul>
<li>Operational Ownership: You are comfortable participating in an on-call rotation and feel a sense of pride in maintaining high-uptime services.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation may be adjusted depending on work location.</p>
<p>For Denver-based hires: estimated annual salary of $168,000-$231,000.</p>
<p><strong>Equity</strong></p>
<p>This role is eligible to participate in Cloudflare’s equity plan.</p>
<p><strong>Benefits</strong></p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future, and make life a little easier and fun! The below is a description of our benefits for employees in the United States; benefits may vary for employees based outside the U.S.</p>
<p><strong>Health &amp; Welfare Benefits</strong></p>
<ul>
<li>Medical/Rx Insurance</li>
</ul>
<ul>
<li>Dental Insurance</li>
</ul>
<ul>
<li>Vision Insurance</li>
</ul>
<ul>
<li>Flexible Spending Accounts</li>
</ul>
<ul>
<li>Commuter Spending Accounts</li>
</ul>
<ul>
<li>Fertility &amp; Family Forming Benefits</li>
</ul>
<ul>
<li>On-demand mental health support and Employee Assistance Program</li>
</ul>
<ul>
<li>Global Travel Medical Insurance</li>
</ul>
<p><strong>Financial Benefits</strong></p>
<ul>
<li>Short and Long Term Disability Insurance</li>
</ul>
<ul>
<li>Life &amp; Accident Insurance</li>
</ul>
<ul>
<li>401(k) Retirement Savings Plan</li>
</ul>
<ul>
<li>Employee Stock Participation Plan</li>
</ul>
<p><strong>Time Off</strong></p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
</ul>
<ul>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we never, ever store client IP addresses, and we will continue to abide by our privacy commitment, ensuring that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, PostgreSQL, Redis, PHP, Kubernetes, Docker, Kafka, React, TypeScript, Kibana, Elasticsearch, Jaeger, Sentry, Senior-Level Backend Expertise, Systems Architecture, Pragmatic Full Stack Ability, Agentic AI Development, Databases, Observability-First Mindset, Cloud &amp; Containers, Operational Ownership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network of services to protect and accelerate internet applications. It handles about 10% of HTTP requests on the internet today.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7521014</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b4363f1-4c3</externalid>
      <Title>Backend Engineer</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We&#39;re looking for a skilled Backend Engineer to join our team at xAI. As a Backend Engineer, you will work on our production systems that power the API.</p>
<p>About xAI:</p>
<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence.</p>
<p>Responsibilities:</p>
<ul>
<li>Work on xAI&#39;s production systems that power the API</li>
<li>Design, implement, and maintain reliable and horizontally scalable distributed systems</li>
<li>Operate commonly used databases such as PostgreSQL, ClickHouse, and MongoDB</li>
<li>Ensure service observability and reliability best practices</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Expert knowledge of either Rust or C++</li>
<li>Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems</li>
<li>Knowledge of service observability and reliability best practices</li>
<li>Experience in operating commonly used databases such as PostgreSQL, ClickHouse, and MongoDB</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Knowledge of Python</li>
<li>Experience with Docker, Kubernetes, and containerized applications</li>
<li>Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)</li>
<li>Hands-on experience with LLM APIs, embeddings, or RAG patterns</li>
<li>Track record of delivering user-facing software at scale</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Strong communication skills</li>
<li>Ability to concisely and accurately share knowledge with teammates</li>
<li>Flat organisational structure</li>
<li>Opportunity to work on challenging projects</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, C++, PostgreSQL, ClickHouse, MongoDB, Python, Docker, Kubernetes, gRPC, LLM APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4991448007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e2392ba0-1bc</externalid>
      <Title>Staff Engineer AI Agents</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI in property management. We build AI agents that act as property managers, handling the full spectrum of interactions with both prospects and current residents on behalf of our clients.</p>
<p>Our agents don’t just assist human workflows; they own them end-to-end, operating across leasing, collections, and resident communications. Zuma aims to keep expanding into adjacent areas of property management.</p>
<p>This is a rare chance to shape the future of how an entire industry operates: not in theory, but in production, at scale, touching real customers and physical assets every day. At Zuma, human and AI agents work side by side, and you&#39;ll help define what that collaboration looks like at its best.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own E2E projects that cross all areas of software development including full stack web apps, agentic AI solutions across multiple work activities, extensive integrations with PMS and CRM systems, infrastructure, and internal tooling.</li>
</ul>
<ul>
<li>Architect, build, and deploy production AI agents using modern agent frameworks, owning the full lifecycle from design to reliability in production.</li>
</ul>
<ul>
<li>Define the technical patterns and standards for how software is built across the engineering org; you will be setting the playbook others follow.</li>
</ul>
<ul>
<li>Strengthen our core systems, including our onboarding/configuration system, integration frameworks, and AI performance analytics infrastructure.</li>
</ul>
<ul>
<li>Collaborate directly with the VPE and product leadership to translate product vision into delivery, making high-stakes technical trade-offs with confidence.</li>
</ul>
<ul>
<li>Own system reliability, observability, and continuous improvement, defining how we measure, monitor, and iterate on our agents and web products in production.</li>
</ul>
<ul>
<li>Work across the stack (backend services, LLM orchestration, integrations, data pipelines, frontends) to ship agents and products that are robust and scalable.</li>
</ul>
<ul>
<li>Tame legacy code and lay down new foundations; the patterns and architecture you create will be inherited by the engineers who come after you.</li>
</ul>
<ul>
<li>Be a close partner to the product and operations teams, turning their domain needs into intelligent automated workflows without requiring domain expertise upfront.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Prior experience at a startup or high-growth company; comfort shipping fast and iterating in production.</li>
</ul>
<ul>
<li>AWS experience with IaC (Terraform) and comfort working with infrastructure / dev ops.</li>
</ul>
<ul>
<li>Background in building self-serve platforms or integration infrastructure.</li>
</ul>
<ul>
<li>Experience with workflow automation platforms or business process orchestration.</li>
</ul>
<ul>
<li>Experience with telephony integrations (Twilio or similar) and building voice-capable agents or chatbots across text and voice channels.</li>
</ul>
<ul>
<li>Familiarity with speech-to-text, text-to-speech, or real-time audio streaming pipelines in production AI systems.</li>
</ul>
<ul>
<li>Classical ML experience: supervised/unsupervised learning, feature engineering, and model training and evaluation outside of LLM contexts.</li>
</ul>
<p><strong>Our Stack</strong></p>
<ul>
<li>Python, TypeScript/Node.js</li>
</ul>
<ul>
<li>OpenAI, Anthropic</li>
</ul>
<ul>
<li>LangGraph, OpenAI Agents SDK, custom orchestration layers</li>
</ul>
<ul>
<li>AWS, AWS ECS, PostgreSQL, Redis</li>
</ul>
<ul>
<li>RealPage, Entrata, Yardi, and other property management systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180K-$220K per year</Salaryrange>
      <Skills>Python, TypeScript, OpenAI, Anthropic, LangGraph, OpenAI Agents SDK, AWS, AWS ECS, PostgreSQL, Redis, RealPage, Entrata, Yardi, AWS IaC (Terraform), Infrastructure / Dev Ops, Self-serve platforms, Integration infrastructure, Workflow automation platforms, Business process orchestration, Telephony integrations (Twilio), Voice-capable agents or chatbots, Speech-to-text, Text-to-speech, Real-time audio streaming pipelines, Classical ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a company that builds AI agents for property management, with a flagship product that is a multichannel leasing agent.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/16961f6d-ab02-469d-8f99-3a68bf5a5026</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>01845b18-90a</externalid>
      <Title>Tech Lead (CI &amp; Test Data Platform)</Title>
      <Description><![CDATA[<p>At Trunk, our mission is to help teams create high-quality software quickly. We&#39;ve helped engineering teams at Google X, Zillow, and Brex understand why their builds fail, which tests are flaky, and how to ship code faster without sacrificing reliability. AI has made writing code 10x faster, but shipping is still painfully slow. The bottleneck has shifted downstream: to merge conflicts, flaky tests, inconsistent code quality, and dozens of other frictions that drain productivity and morale. Engineering teams that can stay focused on designing, implementing, and delivering software will build magical, high-quality projects - and they&#39;ll be happier doing it. We&#39;re building a CI Reliability Platform that empowers teams to land code faster and develop happier.</p>
<p>Our founders launched Trunk in 2021 after designing, delivering, and scaling software at Uber, Google, YouTube, and Microsoft. We raised a $25M Series A led by Initialized Capital (Garry Tan) and a16z (Peter Levine), with investments from Haystack Ventures, Garage VC, and the founders of GitHub (Tom Preston-Werner), Apollo GraphQL (Geoff Schmidt), Algolia (Nicolas Dessaigne), and Peopl.ai (Oleg Rogynsky).</p>
<p>CI pipelines are black boxes. Engineers waste hours debugging failures that turn out to be flaky tests or infrastructure noise. Trunk makes this visible: what failed, why, and whether it&#39;s worth fixing.</p>
<p>The next wave is agentic. AI tools today hit a wall when code leaves the local environment. We&#39;re building the data layer that lets AI agents actually reason about CI: diagnosing failures, suggesting fixes, and eventually shipping code autonomously.</p>
<p>We&#39;re looking for a Tech Lead to own the data platform that powers Trunk&#39;s flaky test detection and CI analytics products. You&#39;ll design and build the systems that ingest millions of test runs per hour, surface actionable insights, and lay the foundation for AI-driven CI workflows.</p>
<p>We&#39;re at an inflection point. The scale challenges are real and growing. The AI/agentic future of development tooling is taking shape, and we&#39;re building the data infrastructure that makes it possible. If you want to work on hard systems problems with direct customer impact, this is the role.</p>
<p>As a Tech Lead, you will:</p>
<ul>
<li>Design and build the data pipelines, storage systems, and backend services that power Trunk&#39;s flaky test and CI products</li>
<li>Lead a team of engineers through complex distributed systems and data infrastructure challenges</li>
<li>Work directly with customers to understand their pain points and translate them into robust technical solutions</li>
<li>Drive architectural decisions for scale, reliability, and future AI/agentic integrations (MCP, semantic failure clustering, automated remediation)</li>
<li>Ship independently with high autonomy. We&#39;re a small team solving hard problems, and you&#39;ll have significant ownership</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>7+ years of backend/infrastructure engineering experience, with a focus on data processing pipelines and distributed systems</li>
<li>Experience leading teams of 2+ engineers on complex technical projects</li>
<li>Track record of building and operating systems at scale</li>
<li>Strong proficiency in Rust and Python; familiarity with TypeScript</li>
<li>Experience with our stack: PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster</li>
<li>Comfort with monitoring, observability, and debugging in distributed environments</li>
<li>Previous experience at a high-growth startup</li>
</ul>
<p>You&#39;re a good fit if:</p>
<ul>
<li>You&#39;re passionate about building high-quality, scalable systems and take pride in clean, maintainable code</li>
<li>You have deep experience with distributed systems, databases, and performance optimization</li>
<li>You&#39;re comfortable navigating large codebases and can ramp quickly on complex systems</li>
<li>You enjoy mentoring engineers and thrive in collaborative environments</li>
<li>You have the experience and intuition to zero in on root causes of bugs that leave others stumped</li>
<li>You&#39;re self-directed, making sound technical decisions without waiting for detailed specs</li>
</ul>
<p>Our tech stack includes:</p>
<ul>
<li>Frontend: TypeScript, React, Next.js, AWS</li>
<li>Backend: TypeScript, Node, AWS</li>
<li>Data pipelines: Dagster, Python, Polars</li>
<li>CI/CD: GitHub Actions</li>
</ul>
<p>We offer:</p>
<ul>
<li>Unlimited PTO</li>
<li>Competitive salary and equity</li>
<li>Work-life balance</li>
<li>Lunch ordered in on us at the office on Wednesdays and Thursdays</li>
<li>Few meetings, so you can ship fast and focus on building</li>
<li>One Medical membership on us!</li>
<li>Top-notch medical, dental, vision, short-term disability, long-term disability, and life insurance</li>
<li>All insurance is 100% company-paid ($0 premiums) for employees and highly subsidized for dependents</li>
<li>FSA, HSA with company contributions, and pre-tax commuter benefits</li>
<li>401(k) plan</li>
<li>Paid parental leave (up to 12 weeks)</li>
</ul>
<p>The salary and equity ranges for this role are $200K-$245K and 0.3%-0.5%.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200K-$245K per year</Salaryrange>
      <Skills>Rust, Python, Typescript, PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Trunk</Employername>
      <Employerlogo>https://logos.yubhub.co/trunk.io.png</Employerlogo>
      <Employerdescription>Trunk is a software company that helps teams create high-quality software quickly. It was founded in 2021 by former engineers from Uber, Google, YouTube, and Microsoft.</Employerdescription>
      <Employerwebsite>https://trunk.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/trunkio/32921dae-d3b1-4771-bb09-cac8a3b14d0c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>443218f3-d14</externalid>
      <Title>Full-Stack Software Engineer (New Grad) – Remote</Title>
      <Description><![CDATA[<p>About SpruceID</p>
<p>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions that give individuals control of their information while enabling governments and enterprises to deliver secure, interoperable services.</p>
<p>The Opportunity</p>
<p>We&#39;re looking for multiple Full-Stack Engineers to join our forward-deployed engineering team. You&#39;ll work alongside experienced engineers and directly with state governments, public sector partners, and enterprise clients to help design, build, and deploy impactful identity solutions. This is an ideal role for a recent graduate who wants to grow quickly while working on technology that matters. You&#39;ll gain hands-on experience across the full stack, learn from senior engineers, and contribute to systems that serve millions of people.</p>
<p>This is a fully remote role open to candidates based in the United States.</p>
<p>Responsibilities</p>
<ul>
<li>Collaborate with customer delivery leads, engineers, and UX designers on real-world deployments.</li>
<li>Build full-stack features for state governments and public sector partners, with guidance from senior engineers.</li>
<li>Learn to translate customer requirements into technical solutions and production-ready systems.</li>
<li>Develop backend services and web applications that meet public sector security, privacy, and accessibility standards.</li>
<li>Contribute to codebases spanning backend, mobile, and browser environments.</li>
<li>Participate in customer deployments and learn the full lifecycle of software delivery.</li>
<li>Engage with open identity standards and privacy-focused engineering practices.</li>
</ul>
<p>Our Commitment to Your Growth</p>
<p>We believe in investing in early-career engineers. As a new grad at SpruceID, you&#39;ll receive:</p>
<ul>
<li>Dedicated mentorship from senior engineers who will guide your technical development.</li>
<li>Structured onboarding to get you up to speed on our stack, our standards, and our customers.</li>
<li>Regular 1:1s and feedback to help you set and achieve career goals.</li>
<li>Opportunities to own projects as you grow, with appropriate support and guardrails.</li>
<li>Exposure to the full product lifecycle, from customer discovery to production deployment.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Software Engineering, or a related field (recent graduates welcome).</li>
<li>Foundational experience with backend development in a statically typed language (Rust, Go, C#, Java, or similar); coursework, internships, or personal projects count.</li>
<li>Demonstrated ability to learn quickly and work collaboratively.</li>
<li>Strong communication skills and genuine interest in working with customers and stakeholders.</li>
<li>Appreciation for open-source software, clean code, and thoughtful engineering.</li>
<li>Based in the U.S. (or willing to relocate without visa assistance; TN status is OK) and excited to contribute to impactful public sector work.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Internship or project experience in full-stack development.</li>
<li>Exposure to cloud platforms (AWS, GCP, or Azure) through coursework or side projects.</li>
<li>Interest in digital identity, cryptography, data privacy, or security.</li>
<li>Familiarity with modern web frontends (React, TypeScript, or similar).</li>
<li>Familiarity with databases (PostgreSQL), APIs (REST or GraphQL), or CI/CD concepts.</li>
<li>Contributions to open-source projects or hackathon participation.</li>
<li>Coursework or interest in accessibility, usability, or inclusive design.</li>
<li>Any exposure to government technology, civic tech, or high-compliance environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$110,000-150,000 per year</Salaryrange>
      <Skills>backend development, statically typed language, Rust, Go, C#, Java, full-stack development, cloud platforms, AWS, GCP, Azure, digital identity, cryptography, data privacy, security, modern web frontends, React, TypeScript, databases, PostgreSQL, APIs, REST, GraphQL, CI/CD concepts, open-source software, clean code, thoughtful engineering, collaboration, communication, customers, stakeholders, government technology, civic tech, high-compliance environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>SpruceID</Employername>
      <Employerlogo>https://logos.yubhub.co/spruceid.com.png</Employerlogo>
      <Employerdescription>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions that give individuals control of their information while enabling governments and enterprises to deliver secure, interoperable services.</Employerdescription>
      <Employerwebsite>https://spruceid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/sprucesystems/c683a712-7a5a-4bed-b580-f899998f044a</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c3536285-729</externalid>
      <Title>Senior Full-Stack Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Full-Stack Engineer to join our forward-deployed engineering team. You&#39;ll work directly with state governments, public sector partners, and enterprise clients to design, build, and deploy impactful identity solutions.</p>
<p>This role blends hands-on software development, technical consulting, and customer success: ideal for someone who thrives at the intersection of technology and mission-driven impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and deploy full-stack solutions for state governments and public sector partners.</li>
<li>Collaborate with customer delivery leads, engineers, and UX designers to ensure successful deployments.</li>
<li>Translate customer requirements into technical architectures and production-ready systems.</li>
<li>Serve as a trusted technical advisor for partners adopting open identity standards and privacy best practices.</li>
<li>Build backend services and full-stack web or mobile apps that meet public sector security, privacy, and accessibility standards.</li>
<li>Contribute to Rust codebases that run across backend, mobile, and browser environments.</li>
<li>Manage customer deployments and provide post-launch technical support.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>2+ years of experience building backend systems in statically typed languages (Rust, Go, C#, or Java).</li>
<li>Strong background in modern web frontends (React, TypeScript, or similar) with an eye for accessibility and security.</li>
<li>Proven ability to lead cross-functional engineering efforts and deliver production-grade systems.</li>
<li>Strong appreciation for open-source software, standards-based design, and community-driven development.</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and DevOps practices.</li>
<li>Excellent communication skills and comfort working directly with customers or stakeholders.</li>
<li>Based in the U.S., excited to collaborate with state government partners.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience with digital identity, cryptography, data privacy, or blockchain technologies (e.g., Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect).</li>
<li>Familiarity with PostgreSQL, GraphQL, or RESTful API design and development.</li>
<li>Understanding of CI/CD pipelines, infrastructure as code, and automation using Terraform, or similar tools.</li>
<li>Exposure to mobile app development (React Native, Flutter, or similar frameworks).</li>
<li>Experience in security engineering, access control, federated identity, or PKI systems.</li>
<li>Prior work in public sector, government technology, or other high-compliance environments.</li>
<li>Interest in usability, accessibility (WCAG, Section 508), and inclusive product design.</li>
<li>Contributions to open-source projects or participation in digital identity standards bodies (W3C, DIF, IETF) are a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C#, Java, React, TypeScript, Cloud infrastructure, DevOps practices, PostgreSQL, GraphQL, RESTful API design, CI/CD pipelines, Infrastructure as code, Automation, Terraform, Mobile app development, Security engineering, Access control, Federated identity, PKI systems, Digital identity, Cryptography, Data privacy, Blockchain technologies, Verifiable Credentials, Decentralized Identifiers, OAuth, OpenID Connect, Usability, Accessibility, Inclusive product design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>SpruceID</Employername>
      <Employerlogo>https://logos.yubhub.co/spruceid.com.png</Employerlogo>
      <Employerdescription>SpruceID builds privacy-preserving, standards-based digital identity and credentialing solutions for governments and enterprises.</Employerdescription>
      <Employerwebsite>https://spruceid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/sprucesystems/b6ed1d39-d3e4-454f-8d8c-a5a65d64651f</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0538b85b-262</externalid>
      <Title>Senior Full Stack Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Full Stack Engineer to design, develop, and maintain our web-based applications and systems. Leveraging your expertise, you will collaborate with cross-functional teams to implement innovative solutions that drive the functionality and performance of our autonomous surface vessels. You will work on complex challenges at the intersection of technology and defense, and play a significant role in the future of maritime operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain high-quality web applications using JavaScript, TypeScript, and React for both front-end and back-end components</li>
<li>Design and implement scalable and responsive user interfaces that meet the needs of various stakeholders, including operators and military personnel</li>
<li>Collaborate with product managers, designers, and other engineers to translate requirements into technical specifications and deliverables</li>
<li>Integrate third-party APIs and services to enhance application functionality and interoperability with external systems</li>
<li>Optimize application performance, security, and reliability through thorough testing, code reviews, and performance tuning</li>
<li>Stay current with emerging technologies, best practices, and industry trends to continuously improve our development processes and tools</li>
<li>Participate in Agile development methodologies, including sprint planning, daily stand-ups, and retrospectives, to ensure timely delivery of features and enhancements</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field; Master&#39;s degree preferred</li>
<li>5-10 years of experience as a Full Stack Engineer using JavaScript, TypeScript, and React</li>
<li>Solid understanding of software engineering principles, data structures, and algorithms</li>
<li>Experience with server-side development using Node.js and frameworks like Express.js</li>
<li>Familiarity with database systems such as MongoDB, MySQL, or PostgreSQL</li>
<li>Proficiency in version control systems (e.g., Git) and CI/CD pipelines</li>
<li>Basic proficiency in Rust</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
<p>Physical Demands:</p>
<ul>
<li>Prolonged periods of sitting at a desk and working on a computer.</li>
<li>Occasional standing and walking within the office.</li>
<li>Manual dexterity to operate a computer keyboard, mouse, and other office equipment.</li>
<li>Visual acuity to read screens, documents, and reports.</li>
<li>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies.</li>
<li>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages).</li>
</ul>
<p>Additional Information:</p>
<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 1324b(a)(3).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, TypeScript, React, Node.js, Express.js, MongoDB, MySQL, PostgreSQL, Git, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for autonomous and intelligent maritime operations.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/8a8fdf3a-df17-4435-adc0-04bae83bd1c9</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d861d686-2ad</externalid>
      <Title>Senior Full Stack Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Full Stack Engineer to design, develop, and maintain our web-based applications and systems. Leveraging your expertise, you will collaborate with cross-functional teams to implement innovative solutions that drive the functionality and performance of our autonomous surface vessels.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain high-quality web applications using JavaScript, TypeScript, and React for both front-end and back-end components</li>
<li>Design and implement scalable and responsive user interfaces that meet the needs of various stakeholders, including operators and military personnel</li>
<li>Collaborate with product managers, designers, and other engineers to translate requirements into technical specifications and deliverables</li>
<li>Integrate third-party APIs and services to enhance application functionality and interoperability with external systems</li>
<li>Optimize application performance, security, and reliability through thorough testing, code reviews, and performance tuning</li>
<li>Stay current with emerging technologies, best practices, and industry trends to continuously improve our development processes and tools</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field; Master&#39;s degree preferred</li>
<li>5-10 years of experience as a Full Stack Engineer using JavaScript, TypeScript, and React</li>
<li>Solid understanding of software engineering principles, data structures, and algorithms</li>
<li>Experience with server-side development using Node.js and frameworks like Express.js</li>
<li>Familiarity with database systems such as MongoDB, MySQL, or PostgreSQL</li>
<li>Proficiency in version control systems (e.g., Git) and CI/CD pipelines</li>
<li>Basic proficiency in Rust</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
<p>Physical Demands:</p>
<ul>
<li>Prolonged periods of sitting at a desk and working on a computer.</li>
<li>Occasional standing and walking within the office.</li>
<li>Manual dexterity to operate a computer keyboard, mouse, and other office equipment.</li>
<li>Visual acuity to read screens, documents, and reports.</li>
<li>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies.</li>
<li>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages).</li>
</ul>
<p>Additional Information:</p>
<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 1324b(a)(3).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, TypeScript, React, Node.js, Express.js, MongoDB, MySQL, PostgreSQL, Git, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for autonomous and intelligent maritime operations.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/302b1da8-aa5a-435e-bd51-8893b55f155f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>65c8442b-dd8</externalid>
      <Title>Full Stack Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Full Stack Engineer responsible for designing, developing, and maintaining our web-based applications and systems. Leveraging your expertise, you will collaborate with cross-functional teams to implement innovative solutions that drive the functionality and performance of our autonomous surface vessels. You will work on complex challenges at the intersection of technology and defense, and play a significant role in the future of maritime operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain high-quality web applications using JavaScript, TypeScript, and React for both front-end and back-end components</li>
<li>Design and implement scalable and responsive user interfaces that meet the needs of various stakeholders, including operators and military personnel</li>
<li>Collaborate with product managers, designers, and other engineers to translate requirements into technical specifications and deliverables</li>
<li>Integrate third-party APIs and services to enhance application functionality and interoperability with external systems</li>
<li>Optimise application performance, security, and reliability through thorough testing, code reviews, and performance tuning</li>
<li>Stay current with emerging technologies, best practices, and industry trends to continuously improve our development processes and tools</li>
<li>Participate in Agile development methodologies, including sprint planning, daily stand-ups, and retrospectives, to ensure timely delivery of features and enhancements</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field; Master&#39;s degree preferred</li>
<li>Experience as a Full Stack Engineer using JavaScript, TypeScript, and React</li>
<li>Solid understanding of software engineering principles, data structures, and algorithms</li>
<li>Experience with server-side development using Node.js and frameworks like Express.js</li>
<li>Familiarity with database systems such as MongoDB, MySQL, or PostgreSQL</li>
<li>Proficiency in version control systems (e.g., Git) and CI/CD pipelines</li>
<li>Basic proficiency in Rust</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
<p>Physical Demands:</p>
<ul>
<li>Prolonged periods of sitting at a desk and working on a computer.</li>
<li>Occasional standing and walking within the office.</li>
<li>Manual dexterity to operate a computer keyboard, mouse, and other office equipment.</li>
<li>Visual acuity to read screens, documents, and reports.</li>
<li>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies.</li>
<li>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages).</li>
</ul>
<p>Additional Information:</p>
<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 1324b(a)(3).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, TypeScript, React, Node.js, Express.js, MongoDB, MySQL, PostgreSQL, Git, Rust</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for autonomous and intelligent maritime operations.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/585ad79d-9f9d-45ea-84fd-bfe2ce1a9c9e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a40d099b-db6</externalid>
      <Title>Solutions Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for early members of our Sales team that can form deep partnerships with our prospects and customers to help them adopt and succeed on the next generation of database infrastructure.</p>
<p>As a Solutions Engineer, you will partner with Sales and Customer Engineering throughout the pre-sales and post-sales journey as the technical expert helping customers solve their most challenging database problems. You will:</p>
<ul>
<li>Lead technical discovery to match customers&#39; business and technical objectives with PlanetScale&#39;s offerings.</li>
<li>Design and execute proof of value timelines that deliver on agreed-upon business outcomes and success criteria.</li>
<li>Design database migration strategies and work hands-on with customers to execute migrations to PlanetScale&#39;s PostgreSQL and Vitess platforms.</li>
<li>Assess workloads, analyze performance requirements, and recommend architecture, sizing, and optimization strategies.</li>
<li>Build tools, scripts, and automation that accelerate migrations and improve customer onboarding.</li>
<li>Create educational content including documentation, guides, blog posts, workshops, and videos.</li>
<li>Collaborate with Product and Engineering teams to advocate for customer needs and shape the platform.</li>
</ul>
<p>You have deep expertise in database systems including replication, high availability, sharding, performance tuning, and migration strategies. You are equally comfortable presenting architecture designs to executives and writing scripts to automate migration tasks. You thrive in customer-facing situations and translate technical concepts into business value for diverse audiences. You are self-motivated and can manage multiple engagements simultaneously with minimal oversight. You enjoy creating content and sharing knowledge through various formats. You are comfortable with occasional travel (&lt;20%).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000 - $250,000 USD</Salaryrange>
      <Skills>MySQL, PostgreSQL, Vitess, database migration, performance tuning, troubleshooting, cloud computing, scripting, automation, AWS Database Migration Service, logical replication tools, Kubernetes, cloud-native architectures, infrastructure-as-code tools, open-source projects, public speaking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that provides a transactional database platform. It has received over $100M in venture financing and serves some of the most innovative companies in the world.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4052805009</Applyto>
      <Location>Remote - EMEA, Remote - NA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bf2f7e1a-d9d</externalid>
      <Title>Enterprise Support Engineer</Title>
      <Description><![CDATA[<p>Job Title: Enterprise Support Engineer</p>
<p>We are seeking an experienced Enterprise Support Engineer to join our core engineering team. As an Enterprise Support Engineer, you will advise and handle support requests from enterprise customers on the PlanetScale platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Advise and handle support requests from enterprise customers on the PlanetScale platform.</li>
<li>Become a customer-facing subject-matter expert for enterprise customers on the PlanetScale platform.</li>
<li>Identify product gaps in a customer-specific context and work with Technical Account Management, Engineering and Sales Engineering teams to prioritize and escalate them.</li>
<li>Be part of an on-call rotation for high-priority issues.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience supporting production databases and applications, preferably at scale.</li>
<li>Experience with database internals and performance tuning, specifically for PostgreSQL and MySQL databases.</li>
<li>Working knowledge of Kubernetes.</li>
<li>Strong ability to communicate and deal directly with customers, whether in email, Slack, video conference, or in person.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Knowledge of common application deployment platforms and frameworks, such as Python, Go, Node, PHP.</li>
<li>Experience with cloud platforms (AWS, GCP, Azure).</li>
<li>Knowledge of monitoring, observability, and debugging tools.</li>
<li>Contributions to open-source projects, especially in the database or infrastructure space.</li>
</ul>
<p>Why PlanetScale?</p>
<p>We&#39;re redefining how high-growth companies manage data at scale, and we work with some of the most exciting brands in gaming, consumer tech, and B2B SaaS. As an Enterprise Support Engineer, you&#39;ll be at the core of supporting the platform that powers world-class apps used by hundreds of millions of users worldwide. PlanetScale is a profitable company with a philosophy centered around building small teams of p99 individuals and is recognized as one of the fastest-growing companies in America.</p>
<p>Total Compensation and Pay Transparency</p>
<p>An employee&#39;s total compensation consists of base salary + variable comp where appropriate + benefits + equity. A member of our Talent Acquisition team will be happy to answer any further questions when we engage with you to begin the interview process.</p>
<p>Salary Range: US $120,000 - $200,000</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>US $120,000 - $200,000</Salaryrange>
      <Skills>PostgreSQL, MySQL, Kubernetes, database internals, performance tuning, Python, Go, Node, PHP, cloud platforms, monitoring, observability, debugging tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that provides a database platform for high-growth companies.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4009926009</Applyto>
      <Location>Remote - NA, APAC, EMEA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bf3843ae-c72</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>JOB TITLE: Senior Software Engineer</p>
<p>LOCATION: Remote, USA</p>
<p>DEPARTMENT: Enterprise Engineering</p>
<p>Omada Health is a digital care provider that empowers people to achieve their health goals through sustainable behavioural change. We are on a mission to inspire and engage people in lifelong health, one step at a time.</p>
<p>Great software is the key to providing effective care at scale. We hire passionate, creative people and give them the autonomy to do great work. Our software engineers are comfortable dealing with high-level specifications, working independently and in small teams, and are heavy contributors in the product process from idea to production.</p>
<p>You work with empathy for your coworkers, stakeholders and users. You are excited to work cross-functionally with a variety of people and ideas. You work directly with stakeholders to design solutions and drive the technical decisions for delivery. You proactively keep up with technology trends and can assess technical trade-offs between solutions across service boundaries. You care about writing quality software and recognise that there are often many right answers. You are excited about the challenge of learning new technologies and context. You are enthusiastic about providing the best possible care for our members.</p>
<p><strong>What you’ll be doing:</strong></p>
<ul>
<li>Build and integrate a combination of home-grown and purchased tools to optimise our contracting, eligibility and billing processes. You will have a direct impact on business outcomes through the improvement of existing or implementation of new solutions in close partnership with product and business stakeholders.</li>
<li>Be an informal leader to the team by continuously identifying ways to improve, mentoring others, and removing ambiguity.</li>
<li>Leverage AI and ML tooling to deliver innovative solutions to complex problems.</li>
<li>Collaborate and influence others to shape future direction, based on your years of previous experience and technology research.</li>
<li>Lead large projects, anticipating infrastructure and architectural needs before they arise.</li>
<li>Research, adopt and advocate for new technologies.</li>
</ul>
<p><strong>What you need for this role:</strong></p>
<ul>
<li>7+ years of experience writing readable, tested, and efficient code</li>
<li>Familiarity with LLMs and GenAI best practices (for writing code and app features)</li>
<li>Experience with a modern front-end framework (React, Vue)</li>
<li>Experience with a modern back-end web framework (Rails, Python)</li>
<li>Experience with a relational database (PostgreSQL, MySQL)</li>
<li>Solid debugging and optimisation skills</li>
<li>Experience with APIs supporting mobile applications delivering rich user experiences</li>
<li>Expertise with SDLC processes and frameworks</li>
<li>Interest in learning new tools, languages, workflows, and philosophies to grow</li>
<li>Curiosity and care more about solving problems than being right</li>
<li>Excellent communication and collaboration skills (verbal and written)</li>
</ul>
<p><strong>Technologies we use:</strong></p>
<ul>
<li>AWS</li>
<li>Ruby</li>
<li>Rails</li>
<li>Postgres</li>
<li>Kafka</li>
<li>Docker</li>
<li>Kubernetes</li>
<li>Amplitude</li>
<li>Marketo</li>
<li>Salesforce</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Stock options</li>
<li>Remote first work from home culture</li>
<li>Flexible vacation to help you rest, recharge, and connect with loved ones</li>
<li>Generous parental leave</li>
<li>Health, dental, and vision insurance (and above market employer contributions)</li>
<li>401k retirement savings plan</li>
<li>Lifestyle Spending Account (LSA)</li>
<li>Mental Health Support Solutions</li>
<li>...and more!</li>
</ul>
<p>It takes a village to change health care. As we build together toward our mission, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as well as you consider Omada!</p>
<ul>
<li>Cultivate Trust.</li>
<li>Seek Context.</li>
<li>Act Boldly.</li>
<li>Deliver Results.</li>
<li>Succeed Together.</li>
<li>Remember Why We’re Here.</li>
</ul>
<p><strong>About Omada Health:</strong></p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behaviour change elements for individuals managing chronic conditions. Omada’s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale.</p>
<p>With more than a decade of experience and data, and 29 peer-reviewed publications showcasing clinical and economic proof points, Omada’s approach is designed to improve health outcomes and contain costs. Our customers include health plans, pharmacy benefit managers, health systems, and employers ranging from small businesses to Fortune 500s. At Omada, we aim to inspire and empower people to make lasting health changes on their own terms.</p>
<p>For more information, visit: https://www.omadahealth.com/</p>
<p>Omada is thrilled to share that we’ve been certified as a Great Place to Work!</p>
<p>We carefully hire the best talent we can find, which means actively seeking diversity of beliefs, backgrounds, education, and ways of thinking. We strive to build an inclusive culture where differences are celebrated and leveraged to inform better design and business decisions. Omada is proud to be an equal opportunity workplace and affirmative action employer. We are committed to equal opportunity regardless of race, color, religion, sex, gender identity, national origin, ancestry, citizenship, age, physical or mental disability, legally protected medical condition, family care status, military or veteran status, marital status, domestic partner status, sexual orientation, or any other basis protected by local, state, or federal laws.</p>
<p>Below is a summary of salary ranges for this role in the following geographies:</p>
<ul>
<li>California, New York State and Washington State Base Compensation Ranges: $179,400 - $224,300*</li>
<li>Colorado Base Compensation Ranges: $171,600 - $214,500*</li>
<li>Other states may vary.</li>
</ul>
<p>This role is also eligible for participation in annual cash bonus and equity grants.</p>
<p>*The actual offer, including the compensation package, is determined based on multiple factors, such as the candidate&#39;s skills and experience, and other business considerations.</p>
<p>See our Candidate Privacy Notice for more information.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$179,400 - $224,300</Salaryrange>
      <Skills>Ruby, Rails, Python, React, Vue, PostgreSQL, MySQL, AWS, Kafka, Docker, Kubernetes, LLMs, GenAI</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a digital care provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7685483</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd4ea9f9-369</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Omada Health is on a mission to inspire and engage people in lifelong health, one step at a time.</p>
<p>We&#39;re seeking a Staff Software Engineer to lead the modernization, optimization, and scalability of Omada&#39;s B2B platform. This role is ideal for someone who combines deep technical expertise with strong leadership: someone eager to design for scale, mentor others, and influence technical direction across teams.</p>
<p>You&#39;ll play a central role in re-architecting complex legacy systems, designing high-performance data pipelines (batch and real-time), and ensuring our core B2B capabilities (file ingestion, marketing outreach, eligibility, and billing) are robust, performant, and ready for the next wave of growth.</p>
<p><strong>About You:</strong></p>
<p>You&#39;re a systems thinker who thrives on solving hard technical challenges at scale. You have a strong foundation in distributed systems, database performance, and architectural design patterns, and you naturally guide teams toward simpler, more scalable solutions.</p>
<p>You&#39;re both a technical expert and a connector, equally comfortable deep in the code or collaborating across disciplines. You&#39;re passionate about leading by example, mentoring others, and helping engineers across Omada level up their craft. You&#39;re also motivated by impact: building systems that help improve health outcomes for millions.</p>
<p><strong>What You&#39;ll Be Doing:</strong></p>
<ul>
<li>Lead architecture, system design and engineering efforts for high-scale, data-intensive B2B systems supporting eligibility, billing, marketing, and file ingestion.</li>
<li>Design and implement batch and real-time processing architectures that are reliable, observable, and performant.</li>
<li>Drive efforts in database performance optimization, schema design, and long-term scalability planning across multi-terabyte PostgreSQL and other persistent stores.</li>
<li>Partner closely with product, infrastructure, and operations teams to deliver resilient, maintainable systems that balance business needs with technical excellence.</li>
<li>Identify and lead engineering-wide initiatives that improve scalability, developer efficiency, or data quality.</li>
<li>Mentor and coach engineers at all levels, and actively contribute to Omada’s engineering community through design reviews, technical talks, and shared best practices.</li>
<li>Contribute to modern, cloud-forward architecture across multiple product domains, ensuring our systems are designed to evolve gracefully and scale efficiently.</li>
<li>Use and advocate for AI-assisted development tools (e.g., Cursor, Claude) to enhance individual and team productivity.</li>
<li>Champion a culture of quality, observability, and reliability through strong DevOps principles and continuous improvement.</li>
</ul>
<p><strong>What You Need for This Role:</strong></p>
<ul>
<li>10+ years of software engineering experience, with a significant portion spent on scalable systems architecture and performance optimization.</li>
<li>Proven success in re-architecting complex legacy platforms and implementing modern, maintainable solutions.</li>
<li>Strong programming experience with Ruby and Python, and comfort working across a modern stack (Rails, GraphQL, Django, Sidekiq).</li>
<li>Deep understanding of relational databases (PostgreSQL, MySQL), performance tuning, and data modeling.</li>
<li>Hands-on experience with both batch and streaming data pipelines (e.g., SQS, Kafka, Kinesis, Airflow).</li>
<li>Demonstrable mastery of API design, distributed systems, and cloud-native architecture (preferably AWS).</li>
<li>Fluency in CI/CD, containerization, and infrastructure-as-code (Docker, Kubernetes, Terraform).</li>
<li>Familiarity with monitoring and observability frameworks (Datadog, OpenTelemetry).</li>
<li>Excellent communication and collaboration skills, with a proven ability to influence and deliver through others.</li>
<li>Growth mindset and genuine curiosity about new technologies, tools, and team approaches.</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Ruby on Rails</li>
<li>Sidekiq</li>
<li>AWS Managed Datastores (RDS with PostgreSQL, ElastiCache, Elasticsearch, SNS/SQS)</li>
<li>GraphQL</li>
<li>Docker</li>
<li>Kubernetes</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you rest, recharge, and connect with loved ones</li>
<li>Generous parental leave</li>
<li>Health, dental, and vision insurance (and above market employer contributions)</li>
<li>401k retirement savings plan</li>
<li>Lifestyle Spending Account (LSA)</li>
<li>Mental Health Support Solutions</li>
<li>...and more!</li>
</ul>
<p><strong>It Takes a Village to Change Healthcare:</strong></p>
<p>At Omada, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as well as you consider Omada!</p>
<ul>
<li>Cultivate Trust. We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.</li>
<li>Seek Context. We ask to understand and we build connections. We do our research up front to move faster down the road.</li>
<li>Act Boldly. We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.</li>
<li>Deliver Results. We reward impact above output. We set a high bar, we’re not afraid to fail, and we take pride in our work.</li>
<li>Succeed Together. We prioritize Omada’s progress above team or individual. We have fun as we get stuff done, and we celebrate together.</li>
<li>Remember Why We’re Here. We push through the challenges of changing healthcare because we know the destination is worth it.</li>
</ul>
<p><strong>About Omada Health:</strong></p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions. Omada’s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale. With more than a decade of experience and data, and 29 peer-reviewed publications showcasing clinical and economic proof points, Omada’s approach is designed to improve health outcomes and contain costs. Our customers include health plans, pharmacy benefit managers, health systems, and employers ranging from small businesses to Fortune 500s. At Omada, we aim to inspire and empower people to make lasting health changes on their own terms. For more information, visit: https://www.omadahealth.com/</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Python, Ruby on Rails, GraphQL, Django, Sidekiq, PostgreSQL, MySQL, API design, distributed systems, cloud-native architecture, AWS, CI/CD, containerization, infrastructure-as-code, Docker, Kubernetes, monitoring and observability frameworks, Datadog, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a digital care provider that empowers people to achieve their health goals through sustainable behavioral change.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7611424</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ca7b0871-868</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Job Overview</strong></p>
<p>Omada Health is a digital care provider that empowers people to achieve their health goals through sustainable behavioral change. We are on a mission to inspire and engage people in lifelong health, one step at a time.</p>
<p>We are looking for a software engineer to help drive us forward in achieving that goal.</p>
<p><strong>What You&#39;ll Be Doing</strong></p>
<ul>
<li>Work with product managers, designers, and a diverse group of talented engineers to build the backends that power our mobile applications, underpinning the overall experience for our members, and the web applications that enable our providers to deliver world-class digital healthcare.</li>
<li>Deliver high-quality web application code, maintaining site stability through code reviews and writing unit and integration tests, while implementing best practices for architecture, system design, and coding standards.</li>
<li>Dedicate 80-90% of your time to hands-on coding, serving as a technical leader and mentor to junior engineers by solving challenging programming and design problems.</li>
<li>Leverage AI tools in your workflow across all aspects of the software development lifecycle.</li>
<li>Lead large projects by anticipating infrastructure and architectural needs, and propose innovative AI solutions to complex problems.</li>
<li>Collaborate with AI experts to integrate AI into existing systems, leveraging their guidance as necessary.</li>
<li>Use your experience to influence and shape the future direction of projects and technologies, working collaboratively to adopt and advocate for new technological advancements.</li>
<li>Participate in our on-call rotation; triage and address reliability issues that come up in production, ensuring system stability and resolving critical problems as they arise.</li>
</ul>
<p><strong>What You Need for This Role</strong></p>
<ul>
<li>7+ years of experience writing readable, tested, and efficient code</li>
<li>Experience with Ruby or Python</li>
<li>Experience with a relational database (PostgreSQL, MySQL)</li>
<li>Experience with designing scalable, maintainable and secure APIs</li>
<li>Experience with CI/CD pipelines</li>
<li>Familiarity with LLMs and GenAI best practices</li>
<li>Familiarity with AI development tools such as Cursor or Copilot</li>
<li>Familiarity with cloud infrastructure (AWS preferred), and deployment tools (Kubernetes, Docker)</li>
<li>Understanding of logging, monitoring and telemetry</li>
<li>Understanding of DevOps concepts and principles</li>
<li>Interest in learning new tools, languages, workflows, and philosophies to grow</li>
<li>Curiosity and care more about solving problems than being right</li>
<li>Excellent communication and collaboration skills (verbal and written)</li>
</ul>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Ruby on Rails</li>
<li>React</li>
<li>AWS (RDS with PostgreSQL, SQS)</li>
<li>GraphQL</li>
<li>Docker</li>
<li>Kubernetes</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you rest, recharge, and connect with loved ones</li>
<li>Generous parental leave</li>
<li>Health, dental, and vision insurance (and above market employer contributions)</li>
<li>401k retirement savings plan</li>
<li>Lifestyle Spending Account (LSA)</li>
<li>Mental Health Support Solutions</li>
</ul>
<p><strong>Cultivate Trust</strong></p>
<ul>
<li>We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.</li>
</ul>
<p><strong>Seek Context</strong></p>
<ul>
<li>We ask to understand and we build connections. We do our research up front to move faster down the road.</li>
</ul>
<p><strong>Act Boldly</strong></p>
<ul>
<li>We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.</li>
</ul>
<p><strong>Deliver Results</strong></p>
<ul>
<li>We reward impact above output. We set a high bar, we’re not afraid to fail, and we take pride in our work.</li>
</ul>
<p><strong>Succeed Together</strong></p>
<ul>
<li>We prioritize Omada’s progress above team or individual. We have fun as we get stuff done, and we celebrate together.</li>
</ul>
<p><strong>Remember Why We’re Here</strong></p>
<ul>
<li>We push through the challenges of changing health care because we know the destination is worth it.</li>
</ul>
<p><strong>About Omada Health</strong></p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions. Omada’s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Python, PostgreSQL, MySQL, API design, CI/CD pipelines, LLMs, GenAI, Cursor, Copilot, cloud infrastructure, deployment tools, logging, monitoring, telemetry, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a digital care provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7711461</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>245477ba-29a</externalid>
      <Title>Senior Software Engineer - Stability</Title>
      <Description><![CDATA[<p>The Stability team at Mercury champions and improves observability. We&#39;ve helped define incident response. We have introduced and support robust background work processing. We monitor and build tooling around platform and database health.</p>
<p>As a Senior Software Engineer - Stability, you will lead projects end-to-end, driving technical work from concept to production. You will define solutions, analyze tradeoffs, make critical decisions, and deliver software that works today and is sustainable for tomorrow.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Championing reliability by making technical choices that improve the reliability of Mercury&#39;s systems and making it easier to get reliability by default.</li>
<li>Measuring outcomes by defining and collecting metrics that show how your work creates value for the business.</li>
<li>Approaching code with craft by writing clear, testable, and maintainable code.</li>
<li>Building for quality and sustainability by designing extensible systems, making balanced decisions on tech debt, planning careful rollouts, and owning the quality of your work through post-launch monitoring.</li>
<li>Improving the developer experience by approaching problems with a product mindset, getting close to internal customers by supporting them and getting feedback from them.</li>
</ul>
<p>The ideal candidate for this role has expertise in PostgreSQL with query optimization, tuning, replication, pooling/proxying, or client-side libraries. They have worked with other data systems supporting a relational database: event streaming, OLAP, caches, etc. They have authored and operated Temporal workflows, are familiar with tracing and OpenTelemetry, and have led moderate-to-large technical projects, including planning, execution, and stakeholder management.</p>
<p>The salary range for this role is $166,600 - $250,900 for US employees and CAD $157,400 - $237,100 for Canadian employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,600 - 250,900 (US) | CAD $157,400 - 237,100 (Canada)</Salaryrange>
      <Skills>PostgreSQL, query optimization, tuning, replication, pooling/proxying, client-side libraries, Temporal workflows, tracing, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury provides powerful banking services. It is a fintech company.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5969193004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e2350d04-53f</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p><strong>LLM &amp; AI Pipeline Engineering</strong></p>
<p>Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p><strong>Retrieval &amp; Vector Search</strong></p>
<p>Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p><strong>ML Model Serving &amp; Operations</strong></p>
<p>Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p><strong>Backend Integration &amp; Reliability</strong></p>
<p>Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p><strong>Collaboration &amp; Growth</strong></p>
<p>Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, PostgreSQL, async patterns, cloud infrastructure, AWS, GCP, Azure, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/66241934-7138-4d7d-8b05-a211ec5d6e24</Applyto>
      <Location>Colombia</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd212aea-514</externalid>
      <Title>Backend Engineer, Agents</Title>
      <Description><![CDATA[<p>Hebbia is seeking a skilled Backend Engineer to join its Agents team. As a Backend Engineer, you will be responsible for building highly efficient software solutions that leverage the latest software and agentic solutions. You will integrate product experience with powerful distributed systems, protecting Hebbia&#39;s technical edge via elegant software design, efficient data communication, and sophisticated integrations.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on backend software engineering.</li>
<li>Proficiency in building backend and API systems using technologies such as Python, Java, or Go.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams.</li>
<li>Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Embraces rapid prototyping with an emphasis on user feedback.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<p>Bonuses:</p>
<ul>
<li>Experience building agentic systems or LLM enabled products.</li>
<li>Frequent user of AI products, especially during the development lifecycle (i.e. Cursor, Claude Code, etc).</li>
</ul>
<p>Compensation:</p>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate&#39;s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, AWS, Kafka, ElasticSearch, PostgreSQL, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for leading asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584766005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2cf203a5-5c5</externalid>
      <Title>Platform Engineer, Document Intelligence</Title>
      <Description><![CDATA[<p>About Hebbia</p>
<hr>
<p>The AI platform for investors and bankers that generates alpha and drives upside.</p>
<p>Founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz, Hebbia powers investment decisions for BlackRock, KKR, Carlyle, Centerview, and 40% of the world’s largest asset managers. Our flagship product, Matrix, delivers industry-leading accuracy, speed, and transparency in AI-driven analysis. It is trusted to help manage over $30 trillion in assets globally.</p>
<p>We deliver the intelligence that gives finance professionals a definitive edge. Our AI uncovers signals no human could see, surfaces hidden opportunities, and accelerates decisions with unmatched speed and conviction. We do not just streamline workflows. We transform how capital is deployed, how risk is managed, and how value is created across markets.</p>
<p>Hebbia is not a tool. Hebbia is the competitive advantage that drives performance, alpha, and market leadership.</p>
<hr>
<p>The Team</p>
<hr>
<p>The Document Intelligence team at Hebbia builds cutting-edge AI solutions that transform how users discover and interact with billions of private and public documents. Our products, including the Hebbia’s Browse application, enable intelligent document exploration, powerful search capabilities, and deep insights extraction. We focus on developing advanced data ingestion and search technologies that deliver intuitive, explainable, and highly responsive experiences. Working closely with customers, our team continuously iterates to address real-world challenges and drive impactful, data-driven decisions. Our goal is to empower users by seamlessly turning vast and complex document repositories into actionable intelligence.</p>
<hr>
<p>The Role</p>
<hr>
<p>Platform engineering at Hebbia is about excellent, scalable enablement. You are responsible for the core distributed systems that power billions of tokens across millions of dollars of AUM. You will be responsible for deploying efficient systems and building software tightly coupled with state-of-the-art infrastructure/system design. Hebbia’s edge is built on operating on the edge of the tokenomics curve and you will serve as a key contributor in this area. We value engineers who think on their feet, innovate and can solve for exponential scale.</p>
<hr>
<p>Responsibilities</p>
<hr>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<hr>
<p>Who You Are</p>
<hr>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field. A strong academic background with coursework in data structures, algorithms, and software development is preferred.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on distributed systems and platform engineering.</li>
<li>Proficiency in building backend and distributed systems using technologies such as Python, Java, or Go.</li>
<li>Deep understanding of scalable system design, performance optimization, and resilience engineering.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Knowledge of workflow orchestration and execution platforms like Airflow, Temporal or Prefect.</li>
<li>Proven experience enabling observability patterns.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams. Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<hr>
<p>Bonuses:</p>
<ul>
<li>Experience building distributed systems leveraging technologies such as etcd or Apache Zookeeper.</li>
<li>Frequent user of AI products, especially during the development lifecycle (i.e. Cursor, Claude Code, etc).</li>
</ul>
<hr>
<p>Compensation</p>
<hr>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
<hr>
<p>Life @ Hebbia</p>
<hr>
<ul>
<li>PTO: Unlimited</li>
<li>Insurance: Medical + Dental + Vision + 401K</li>
<li>Eats: Catered lunch daily + doordash dinner credit if you ever need to stay late</li>
<li>Parental leave policy: 3 months non-birthing parent, 4 months for birthing parent</li>
<li>Fertility benefits: $15k lifetime benefit</li>
<li>New hire equity grant: competitive equity package with unmatched upside potential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>backend and distributed systems, Python, Java, Go, scalable system design, performance optimization, resilience engineering, cloud platforms, AWS, Kafka, ElasticSearch, PostgreSQL, Redis, workflow orchestration and execution platforms, Airflow, Temporal, Prefect, observability patterns, etcd, Apache Zookeeper, AI products, Cursor, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz, and powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584750005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>aebaacf5-640</externalid>
      <Title>Integrations Engineer</Title>
      <Description><![CDATA[<p>You will own the full lifecycle of integrations that power Hebbia&#39;s AI , from designing connectors to deploying them in production, monitoring their behavior, and debugging failures in real time.</p>
<p>You&#39;ll work across systems like Snowflake, S3, SharePoint, and internal customer infrastructure , building pipelines that need to handle real-world complexity: unreliable APIs, evolving schemas, massive datasets, and edge cases that don’t show up in documentation.</p>
<p>This role is hands-on, high-ownership, and deeply technical. You won’t just write code , you’ll develop the instincts to operate and debug complex distributed systems in production.</p>
<p>You will build connectors and ingestion pipelines that bring enterprise data into Hebbia&#39;s AI platform, from Snowflake warehouses and SharePoint libraries to live pricing feeds, high-velocity news data, and proprietary customer systems.</p>
<p>You will design and operate pipelines that handle scale, failures, and edge cases gracefully.</p>
<p>You will debug issues across APIs, auth systems, and data formats, often under real-time customer pressure.</p>
<p>You will own reliability end-to-end: monitoring, alerting, on-call, and incident response.</p>
<p>You will improve internal tooling and observability to make systems more robust and easier to operate.</p>
<p>You will partner with product and customer teams to scope, prioritize, and ship the integrations that unlock Hebbia&#39;s highest-value use cases.</p>
<p>You will design and ship agents that sit on top of the ingestion layer, making enterprise data accessible and actionable across all of Hebbia&#39;s product surfaces, from document analysis to structured query workflows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $265,000</Salaryrange>
      <Skills>Python, APIs, OAuth flows, webhook patterns, rate limiting, pagination, cloud infrastructure, AWS, Kafka, PostgreSQL, Redis, ElasticSearch, enterprise data platforms, document processing pipelines, content extraction systems, agentic systems, LLM-enabled products, AI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4675784005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1e388b24-397</externalid>
      <Title>Backend Engineer, Growth and Data</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Backend Software Engineer to join our Growth and Data team. As a key member of our team, you will build and maintain powerful backend systems that drive user engagement and fuel our continued expansion. Your role involves architecting and implementing robust APIs, services, and infrastructure that empower customers with tailored, high-value experiences.</p>
<p>Your responsibilities will include owning critical system components, unlocking O(1) universal indexing, driving performance optimization, and mentoring and guiding junior engineers. You will also collaborate closely with product teams, designers, and frontend engineers to take ownership of core backend features from initial design through deployment.</p>
<p>To succeed in this role, you will need a Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field, and 5+ years of software development experience at a venture-backed startup or top technology firm. You should be proficient in building backend and API systems using technologies such as Python, Java, or Go, and have extensive experience with cloud platforms (e.g., AWS).</p>
<p>You will also need working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis, and the ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, Cloud platforms (e.g., AWS), Kafka, ElasticSearch, PostgreSQL, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584761005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b26de846-225</externalid>
      <Title>Backend Engineer, Agent Collaboration Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Backend Engineer to join our Agent Collaboration platform team. As a Backend Engineer at Hebbia, you will blend expertise in systems, application layer software, and data modeling to build highly efficient software solutions. You will be responsible for leveraging the latest software and agentic solutions and integrating product experience with powerful distributed systems. Your key responsibilities will include:</p>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on backend software engineering.</li>
<li>Proficiency in building backend and API systems using technologies such as Python, Java, or Go.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams.</li>
<li>Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Embraces rapid prototyping with an emphasis on user feedback.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<p>As a bonus, experience building agentic systems or LLM-enabled products, as well as frequent use of AI products during the development lifecycle, will be highly valued.</p>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate&#39;s experience and qualifications.</p>
<p>At Hebbia, we offer a range of benefits, including:</p>
<ul>
<li>Unlimited PTO</li>
<li>Medical, dental, and vision insurance</li>
<li>401K plan</li>
<li>Catered lunch daily</li>
<li>DoorDash dinner credit if you ever need to stay late</li>
<li>3 months of leave for non-birthing parents, 4 months for birthing parents</li>
<li>$15k lifetime fertility benefit</li>
<li>Competitive equity package with unmatched upside potential</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, Cloud platforms (e.g., AWS), Kafka, ElasticSearch, PostgreSQL, Redis, Agentic systems, LLM-enabled products, AI products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for top asset managers who collectively manage over $30 trillion in assets globally.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584764005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5242ca9a-088</externalid>
      <Title>Staff Automation Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Automation Engineer to have a huge impact on the Business Systems, Security, Production Engineering and IT functions. This role is for a seasoned engineer who thrives on solving complex operational challenges, enhancing system security and stability, and improving efficiency through automation, best practices, and AI technologies.</p>
<p>Your day-to-day will involve implementing Agentic AI and LLM-powered workflows using tools like Tines, AWS Agentcore, AWS Bedrock, and Claude Code. You will deploy systems with Infrastructure as Code (IaC) (e.g., Terraform) and build and maintain automation workflows across key enterprise platforms (e.g., Atlassian, Okta, Google Workspace, Slack, Zoom, knowledge management systems), cybersecurity systems (e.g., SIEM, GRC platforms, Data Security Platforms), and cloud environments (AWS, GCP).</p>
<p>You will build AI-driven chatbots or intelligent agents that automate tasks, support conversational workflows, and integrate with enterprise applications. You will partner with IT, Security, GRC, Procurement, and business teams to automate operational tasks and processes, reducing toil, improving efficiency, and enabling the business.</p>
<p>You will develop integrations using REST APIs, JSON, webhooks, and scripting languages (JavaScript, Python). You will follow established automation and AI standards for quality, security, and governance, and propose improvements where appropriate.</p>
<p>You will troubleshoot, maintain, and optimize existing workflows to improve stability and performance. You will document designs, workflows, configurations, and operational procedures.</p>
<p>You will participate in code reviews, technical discussions, and team-based learning to uplift engineering quality and consistency.</p>
<p>You will work with various tooling in Security, IT, and Production Engineering.</p>
<p>This role requires 10+ years of experience in automation engineering, systems integration, or workflow development. You should have experience with automation platforms such as Tines, Retool, Superblocks, or n8n. You should also have hands-on experience with Terraform and containerization technologies.</p>
<p>You should have experience developing LLM-powered automations, conversational interfaces, or Agentic AI assistants. You should have knowledge of Git and modern version control practices.</p>
<p>You should have strong skills in REST APIs, JSON, webhooks, JavaScript, and Python. You should also have familiarity with identity systems (Okta, SCIM) and RBAC concepts.</p>
<p>You should have familiarity with cloud environments such as Google Cloud Platform (GCP) and Amazon Web Services (AWS).</p>
<p>You should be able to break down problems, collaborate cross-functionally, and deliver solutions with moderate guidance.</p>
<p>You should have strong communication skills and the ability to translate functional requirements into technical outputs.</p>
<p>Preferred experience includes familiarity with data platform and database technologies (e.g., Snowflake, PostgreSQL, Cassandra, DynamoDB).</p>
<p>Work perks at Greenlight include:</p>
<ul>
<li>Medical, dental, vision, and HSA match</li>
<li>Paid life insurance, AD&amp;D, and disability benefits</li>
<li>Traditional 401k with company match</li>
<li>Unlimited PTO, paid company holidays, and pop-up bonus holidays</li>
<li>Professional development stipends</li>
<li>Mental health resources and 1:1 financial planners</li>
<li>Fertility healthcare</li>
<li>100% paid parental and caregiving leave, plus cleaning service and meals during your leave</li>
<li>Flexible WFH, with both remote and in-office opportunities</li>
<li>Fully stocked kitchen, catered lunches, and occasional in-office happy hours</li>
<li>Employee resource groups</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000</Salaryrange>
      <Skills>Agentic AI, LLM-powered workflows, Tines, AWS Agentcore, AWS Bedrock, Claude Code, Infrastructure as Code (IaC), Terraform, REST APIs, JSON, webhooks, JavaScript, Python, Git, modern version control practices, identity systems, RBAC concepts, cloud environments, Google Cloud Platform (GCP), Amazon Web Services (AWS), data platform and database technologies, Snowflake, PostgreSQL, Cassandra, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company providing a banking app for families. They serve over 6 million parents and kids.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/d85a9c34-4434-4f6d-8f01-bccb9521c036</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>87a942b2-3e0</externalid>
      <Title>Senior Software Engineer | Blockchain</Title>
      <Description><![CDATA[<p>About Daylight</p>
<p>Daylight is the decentralised energy company. We&#39;re building a protocol that converts distributed solar and storage systems into yield-bearing infrastructure. These assets are deployed behind the meter in American homes and businesses, generating electricity revenues that flow onchain as a single, liquid yield token.</p>
<p>Our team brings together energy and crypto veterans who believe in a future where communities power themselves. Backed by a16z, Framework, and other industry-leading investors, we&#39;re building the infrastructure to make that vision a reality.</p>
<p>The Role</p>
<p>As a Senior Software Engineer | Blockchain at Daylight, you&#39;ll play a crucial role in developing our Django APIs and backend services that support our mobile app and partner integrations as well as designing the interactions with our smart contracts onchain. You&#39;ll build functionality that helps homeowners monitor, control, and optimise their energy usage while participating in our decentralised energy network.</p>
<p>We&#39;re an early-stage startup, so this role goes beyond just writing code. You&#39;ll have significant input on architecture decisions, user experience, and product direction. As we scale, there will be opportunities to grow into a leadership or management position, mentoring junior developers and shaping our backend development strategy and onchain integrations.</p>
<p>This role is ideal for someone who is passionate about crypto and clean energy, values craftsmanship in their code, and thrives in dynamic startup environments where your work has a direct impact on the product&#39;s success and the company&#39;s mission.</p>
<p>Responsibilities</p>
<ul>
<li><p>Develop and maintain scalable API endpoints that support vendor/partner integrations</p>
</li>
<li><p>Collaborate with cross-functional teams to define, design, and ship innovative features</p>
</li>
<li><p>Continuously monitor performance and resolve bottlenecks and bugs across all endpoints</p>
</li>
<li><p>Champion code quality, automation, and comprehensive testing practices</p>
</li>
<li><p>Design and implement Solidity smart contracts that underpin Daylight&#39;s decentralised energy markets</p>
</li>
<li><p>Develop Daylight&#39;s decentralised data storage and validation layer using modern frameworks (e.g. Commonware, Snapchain L3)</p>
</li>
<li><p>Architect secure oracle flows to feed real-world energy data onchain</p>
</li>
<li><p>Contribute to architecture planning and drive technical design decisions that ensure scalability and security</p>
</li>
<li><p>Optimise the application stack for maximum speed and reliability</p>
</li>
<li><p>Embrace CI/CD methodologies and infrastructure-as-code practices to streamline deployments</p>
</li>
</ul>
<p>What we&#39;re looking for</p>
<ul>
<li>5+ years of experience developing APIs and backend services</li>
<li>2+ years of experience writing Solidity smart contracts</li>
<li>Strong understanding of API Development and best practices</li>
<li>Experience with Python, Django, PostgreSQL and Redis</li>
<li>Proficiency in writing clean, readable, and well-documented code</li>
<li>Experience with REST APIs, GraphQL, and network request handling</li>
<li>Familiarity with version control systems, particularly Git</li>
<li>Knowledge of backend performance profiling and optimisation techniques</li>
<li>Experience with testing and managing AWS infrastructure and Kubernetes</li>
<li>Familiarity with crypto and related infrastructure, specifically EVM</li>
<li>Ability to work in a fast-paced environment with changing requirements</li>
<li>Strong problem-solving skills and attention to detail</li>
<li>Familiarity with DeFi protocols, especially stablecoins, lending markets, liquid staking, or yield optimisation systems</li>
<li>Experience integrating offchain data via oracles</li>
</ul>
<p>What we offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Opportunity to shape a category-defining product from the ground up</li>
<li>Comprehensive health, dental, and vision insurance</li>
<li>Wellhub membership</li>
<li>Monthly wellness stipend</li>
</ul>
<p>Daylight values diversity and welcomes applications from all qualified candidates. We&#39;re building technology that serves everyone, and we believe diverse perspectives make our team and our product stronger.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>API Development, Solidity smart contracts, Python, Django, PostgreSQL, Redis, REST APIs, GraphQL, network request handling, Git, backend performance profiling, AWS infrastructure, Kubernetes, EVM, DeFi protocols, stablecoins, lending markets, liquid staking, yield optimisation systems, offchain data via oracles</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Daylight</Employername>
      <Employerlogo>https://logos.yubhub.co/godaylight.com.png</Employerlogo>
      <Employerdescription>Daylight is a decentralised energy company building a protocol that converts distributed solar and storage systems into yield-bearing infrastructure.</Employerdescription>
      <Employerwebsite>https://www.godaylight.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/daylight/jobs/4574415008</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9c762bb5-ef4</externalid>
      <Title>Senior Backend Engineer</Title>
      <Description><![CDATA[<p>About Daylight</p>
<p>Daylight is a decentralized energy company building a protocol that converts distributed solar and storage systems into yield-bearing infrastructure.</p>
<p>As a Senior Backend Developer at Daylight, you&#39;ll play a crucial role in developing our Django APIs and backend services that support our mobile app and partner integrations. You&#39;ll build functionality that helps homeowners monitor, control, and optimize their energy usage while participating in our decentralized energy network.</p>
<p>Responsibilities</p>
<ul>
<li>Design and build robust GraphQL queries and mutations that power our mobile app and are tailored for mobile performance</li>
<li>Architect, develop, and maintain scalable API endpoints that support vendor/partner integrations</li>
<li>Collaborate with cross-functional teams to define, design, and ship innovative features</li>
<li>Continuously monitor performance and resolve bottlenecks and bugs across all endpoints</li>
<li>Champion code quality, automation, and comprehensive testing practices</li>
</ul>
<p>CRM &amp; Internal Tools Development</p>
<ul>
<li>Leverage and extend Django’s admin interface to build a feature-rich internal CRM and administration portal</li>
<li>Customize the Django admin site to meet evolving business and workflow needs</li>
<li>Work with stakeholders to refine internal processes and drive user-friendly administrative solutions</li>
</ul>
<p>Technical Architecture &amp; DevOps</p>
<ul>
<li>Contribute to architecture planning and drive technical design decisions that ensure scalability and security</li>
<li>Optimize the application stack for maximum speed and reliability</li>
<li>Embrace CI/CD methodologies and infrastructure-as-code practices to streamline deployments</li>
<li>Evaluate and integrate emerging technologies that enhance our backend ecosystem</li>
</ul>
<p>What we’re looking for</p>
<ul>
<li>3+ years of experience developing APIs and backend services</li>
<li>Strong understanding of API Development and best practices</li>
<li>Experience with Python, Django, PostgreSQL and Redis</li>
<li>Proficiency in writing clean, readable, and well-documented code</li>
<li>Experience with REST APIs, GraphQL, and network request handling</li>
<li>Familiarity with version control systems, particularly Git</li>
<li>Knowledge of backend performance profiling and optimization techniques</li>
<li>Experience with testing and managing AWS infrastructure and Kubernetes</li>
<li>Ability to work in a fast-paced environment with changing requirements</li>
<li>Strong problem-solving skills and attention to detail</li>
</ul>
<p>What we offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Opportunity to shape a category-defining product from the ground up</li>
<li>Comprehensive health, dental, and vision insurance</li>
<li>Wellhub membership</li>
<li>Monthly wellness stipend</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Django, PostgreSQL, Redis, REST APIs, GraphQL, Git, AWS infrastructure, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Daylight</Employername>
      <Employerlogo>https://logos.yubhub.co/godaylight.com.png</Employerlogo>
      <Employerdescription>Daylight is a decentralized energy company building a protocol that converts distributed solar and storage systems into yield-bearing infrastructure.</Employerdescription>
      <Employerwebsite>https://www.godaylight.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/daylight/jobs/4574421008</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2980529d-a4a</externalid>
      <Title>Member of Technical Staff, Trading (Derivatives)</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. You will be implementing front office and back office trading systems that are used by institutional investors around the world on a daily basis to trade, invest and exchange cryptocurrency assets.</p>
<p>The crypto industry is one of the most exciting industries in tech today, and it is constantly changing! At Anchorage Digital, we are building foundational technology to help the crypto industry evolve in a safe, regulated and highly secure manner, which we believe is essential for maximizing the potential of this exciting industry.</p>
<p>We think about growth and career development differently at Anchorage Digital. We define performance as acquiring, possessing, and practicing a relevant set of skills and competencies - and the ability to apply them effectively and consistently.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Participate in task breakdown, estimation, design, implementation and maintenance of trading systems</li>
<li>Use your knowledge of the trading system domain to influence the technical direction of the Brokerage team. Advocate for improved processes or technology when appropriate.</li>
<li>Dive deep into complex, ambiguous problems, formulate elegant and practical solutions</li>
<li>Review other developers&#39; code to ensure consistency, reduce errors and share context across the Brokerage engineering team</li>
<li>Drive iterative system improvement by implementing robust instrumentation and metric collection.</li>
</ul>
<p><strong>Complexity and Impact of Work</strong></p>
<ul>
<li>Able to work either independently or as a lead engineer on a team to deliver features</li>
<li>Capable of breaking down large projects into smaller tasks, and accurately estimating the time and scope of projects. Articulate effectively the different options considered, analyze trade-offs, justify and recommend priorities.</li>
</ul>
<p><strong>Organizational Knowledge</strong></p>
<ul>
<li>Ensure that knowledge is shared across the team; avoid positioning yourself or others as a single point of failure.</li>
<li>Collaborate cross-functionally with the Brokerage team and other teams at Anchorage Digital</li>
</ul>
<p><strong>Communication and Influence</strong></p>
<ul>
<li>Mentor and guide multiple engineers on the team within your area of specialization or domain, and help others understand the strategic goals of the Brokerage team and how your work relates to these</li>
<li>Understand others&#39; context or underlying needs, motivations, emotions or concerns, and adjust communication to ensure maximum impact and effectiveness</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of professional experience with Go</li>
<li>Experience with modern software development practices, including automated unit, integration, and end-to-end testing; cloud development; database design and interaction using MySQL or PostgreSQL; and experience using Git</li>
<li>Knowledge of financial asset trading systems and understanding of several (not all) of the following topics: Order Execution Management Systems (OEMS), FIX protocol, market data, low latency application architectures and messaging, matching engines, FX trading, OTC, derivatives, position/risk management, cryptocurrency trading, trading back office systems</li>
<li>Real world experience building financial trading systems</li>
<li>Passionate about good software engineering design and implementation practices (e.g. TDD, SOLID, refactoring, etc)</li>
<li>Genuinely care about code quality and test infrastructure</li>
<li>Believe software engineering is a team activity and enjoy collaborating every single day, learning from and mentoring others</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with GraphQL API design and implementation</li>
<li>Experience with gRPC API design and implementation</li>
<li>Background in the finance industry</li>
<li>Experience working with digital asset (i.e. cryptocurrency) trading</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Modern software development practices, Automated unit, integration and end to end testing, Cloud development, Database design and interaction using MySQL or PostgreSQL, Git, Financial asset trading systems, Order Execution Management Systems (OEMS), FIX protocol, Market data, Low latency application architectures and messaging, Matching engines, FX trading, OTC, Derivatives, Position/risk management, Cryptocurrency trading, Trading back office systems, GraphQL API design and implementation, gRPC API design and implementation, Background in the finance industry, Experience working with digital asset (i.e. cryptocurrency) trading</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/71ff4456-9a32-4404-8d83-f9552e3f1050</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>608305cb-5a6</externalid>
      <Title>Software Engineer (SWE I / SWE II)</Title>
      <Description><![CDATA[<p>We are looking for a Software Engineer to join our Lab Systems team. As a Software Engineer, you will work closely with engineers, product partners, and laboratory scientists to build and evolve internal software systems that support the design, build, and testing of therapeutic antibodies at scale.</p>
<p>This role is designed for engineers with several years of experience building and supporting production software who are excited to grow their technical scope and domain impact. Depending on experience and demonstrated impact, this role may be leveled as Software Engineer I (SWE I) or Software Engineer II (SWE II).</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement, test, and maintain features in BigHat&#39;s internally developed, cloud-based LIMS+ platform.</li>
<li>Work independently on well-scoped features and improvements, following work through implementation, testing, and release.</li>
<li>Collaborate closely with cross-functional partners (scientists, product owners, and other engineers) to translate real-world lab workflows into reliable software.</li>
<li>Participate actively in engineering ceremonies, technical discussions, and code reviews.</li>
<li>Own the quality and outcomes of your work, including debugging, test failures, and production issues.</li>
</ul>
<p>This role reports to the Lab Systems Lead and works closely with the Lab Systems Product Owner, with responsibilities that impact teams across BigHat.</p>
<p>About You:</p>
<ul>
<li>You have experience contributing to and owning work in a production software environment.</li>
<li>You are comfortable working independently on small to medium features and improvements.</li>
<li>You communicate clearly about progress, risks, and tradeoffs, and collaborate effectively with peers and partners.</li>
<li>You take ownership of your work and follow issues through to resolution.</li>
<li>You are curious and motivated by building software that supports real users doing complex work.</li>
</ul>
<p>Experience:</p>
<ul>
<li>3–5 years of professional software engineering experience building production systems OR 2+ years of professional software engineering experience with prior experience in biotech, life sciences, laboratory environments, or scientific software, where domain knowledge meaningfully accelerates impact.</li>
</ul>
<p>Relevant Tech / Skills:</p>
<ul>
<li>Experience with some (not necessarily all) of the following: TypeScript, React, Material-UI, Vega, Python 3, SQLAlchemy, RESTful API design, AWS (CDK, Lambda, Step Functions, ECS/Batch, Fargate, API Gateway, Athena), relational databases (e.g., PostgreSQL), Pandas, PyTorch or other ML frameworks (nice to have, not required).</li>
</ul>
<p>Benefits:</p>
<ul>
<li>The salary estimated for this position is $135,000 - $175,000 + bonus + options + benefits. Compensation will vary depending on job-related knowledge, skills, and experience. Actual compensation will be confirmed in writing at the time of the offer.</li>
<li>Range of health insurance plan options through Anthem and Kaiser (monthly credit if benefit waived)</li>
<li>Dental and vision coverage through Guardian</li>
<li>Additional well-being benefits through Nayya, OneMedical, Wagmo, Rula, and more</li>
<li>401(k) with company match</li>
<li>DTO, two weeks of company-wide shutdown, and 12 company holidays</li>
<li>Paid parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $175,000 + bonus + options + benefits</Salaryrange>
      <Skills>TypeScript, React, Material-UI, Vega, Python 3, SQLAlchemy, RESTful API design, AWS (CDK, Lambda, Step Functions, ECS/Batch, Fargate, API Gateway, Athena), relational databases (e.g., PostgreSQL), Pandas, PyTorch or other ML frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Biotechnology</Industry>
      <Employername>Bighatbiosciences</Employername>
      <Employerlogo>https://logos.yubhub.co/bighat.bio.png</Employerlogo>
      <Employerdescription>BigHat Biosciences is a biotechnology company that develops and manufactures therapeutic antibodies.</Employerdescription>
      <Employerwebsite>https://bighat.bio/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://bighatbiosciences.pinpointhq.com/en/postings/9c33a0d3-782d-4e9e-9b3c-6609cb47f704</Applyto>
      <Location>San Mateo, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cbc0884f-89f</externalid>
      <Title>Sr. Staff Engineer (Cloud, Python, Go, LLM)</Title>
      <Description><![CDATA[<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>Join us to transform the future through continuous technological innovation. You are a visionary engineer with a passion for leveraging advanced technologies to solve complex challenges. You thrive in dynamic environments, consistently pushing boundaries to drive innovation. With over eight years of experience in distributed systems, enterprise software, and microservices, you possess deep technical expertise and a strong foundation in Python, Go, and modern cloud platforms.</p>
<p>Your knowledge of Kubernetes, containerization, and hybrid cloud architectures is complemented by a robust understanding of Linux systems and automation tools. You are skilled at collaborating across globally distributed teams, bringing clarity to technical discussions and architectural designs. You are self-driven, continuously seeking to learn and experiment with emerging technologies, including Generative AI and LLMs.</p>
<p>Your communication skills enable you to articulate ideas clearly and influence stakeholders, whether they are internal R&amp;D teams or external customers. You are motivated by opportunities to democratize AI, streamline development processes, and empower others with innovative solutions. Your curiosity and resilience drive you to prototype, test, and refine new concepts, ensuring Synopsys remains at the forefront of the industry.</p>
<p>Above all, you value inclusivity, teamwork, and the pursuit of excellence.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain scalable cloud services for R&amp;D teams to host Generative AI applications on leading cloud platforms.</li>
</ul>
<ul>
<li>Build and deliver cloud-native, containerized AI systems for on-premises customers, ensuring seamless integration and deployment.</li>
</ul>
<ul>
<li>Lead orchestration of GPU scheduling within Kubernetes ecosystems, utilizing tools like Nvidia GPU Operator and Multi-Instance GPU (MIG).</li>
</ul>
<ul>
<li>Architect reliable and cost-effective hybrid cloud solutions using cutting-edge technologies such as Docker, Kubernetes Cluster Federation, and Azure Arc.</li>
</ul>
<ul>
<li>Streamline onboarding processes for internal products and external customers, creating assets and artifacts that facilitate access to GenAI technologies.</li>
</ul>
<ul>
<li>Collaborate with external customers to understand their environments, constraints, and architectures, defining and integrating tailored platforms and products.</li>
</ul>
<ul>
<li>Prototype, experiment, and test newer technologies, including Generative AI, LLMs, and inference servers, to drive innovation within Synopsys.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>BS/MS in Computer Science, Software Engineering, or equivalent.</li>
<li>8+ years of experience in distributed systems, enterprise software, and microservices.</li>
<li>Expert proficiency in Python and Go programming languages.</li>
<li>Deep understanding of Kubernetes (on-premises and managed services like AKS/EKS/GKE).</li>
<li>Strong systems knowledge: Linux kernel, cgroups, namespaces, and Docker.</li>
<li>Experience with CI/CD automation, Infrastructure as Code (IaC), and cloud providers (AWS/GCP/Azure).</li>
<li>Ability to design complex distributed systems and solve challenging problems efficiently.</li>
<li>Experience with RDBMS (PostgreSQL preferred) for handling large data sets.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Self-motivated with a continuous learning mindset.</li>
<li>Experience working with globally distributed teams.</li>
<li>Nice to have: Experience with Generative AI, LLMs, inference servers, and prototyping new technologies.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>Innovative problem-solver who thrives in ambiguity and complexity.</li>
<li>Collaborative team player, comfortable working with global and cross-functional teams.</li>
<li>Clear and effective communicator, able to articulate technical concepts to diverse audiences.</li>
<li>Resilient and adaptable, eager to learn and experiment with new technologies.</li>
<li>Inclusive and empathetic, valuing diverse perspectives and backgrounds.</li>
<li>Driven by curiosity, continuous improvement, and the pursuit of excellence.</li>
</ul>
<p><strong>The Team You’ll Be A Part Of</strong></p>
<p>You’ll join the Synopsys Platform Engineering team, an innovative, globally distributed group dedicated to transforming R&amp;D product development and deployment. Our team is passionate about leveraging cloud, containerization, and AI technologies to streamline workflows and accelerate innovation. We work collaboratively, experiment boldly, and support each other in delivering high-impact solutions that shape the future of electronic design automation.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Kubernetes, containerization, hybrid cloud architectures, Linux systems, automation tools, CI/CD automation, Infrastructure as Code (IaC), cloud providers (AWS/GCP/Azure), RDBMS (PostgreSQL), Generative AI, LLMs, inference servers, prototyping new technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys develops and maintains software used in chip design, verification, and manufacturing.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/hyderabad/sr-staff-engineer-cloud-python-go-llm/44408/92664451936</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>09656702-eee</externalid>
      <Title>Senior Fullstack Engineer</Title>
      <Description><![CDATA[<p>The Subscriptions Mission builds and evolves the experiences and systems that allow millions of listeners to discover, subscribe to, and enjoy Spotify Premium.</p>
<p>Our teams focus on awareness, acquisition, activation, retention, and commerce, ensuring seamless experiences that help fans connect more deeply with the audio they love while enabling sustainable growth for Spotify.</p>
<p>As a Senior Fullstack Engineer, you’ll work on systems that process large volumes of transactions and support the global purchase experience. Your work will help evolve the platform that powers Spotify’s commerce ecosystem, ensuring our payment infrastructure remains scalable, reliable, and innovative as we grow.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and implement scalable backend services and frontend components that power Spotify’s commerce platform.</li>
<li>Design and develop critical capabilities in Java and Python services operating in a high-volume, low-latency environment.</li>
<li>Build and maintain the pay-in functionality that enables payment processing across Spotify products.</li>
<li>Run and analyze experiments to improve payment performance and user experience.</li>
<li>Improve system reliability through monitoring, alerting, and operational best practices.</li>
<li>Collaborate with engineers, product managers, and cross-functional teams to deliver impactful commerce features.</li>
<li>Contribute to evolving Spotify’s payment infrastructure and SDK capabilities for developers across the company.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>You are experienced with modern JavaScript development, testing, and debugging.</li>
<li>You have experience building and scaling backend services using Java and/or Python.</li>
<li>You have worked in cloud-native environments such as Google Cloud Platform or similar platforms.</li>
<li>You have experience designing and maintaining libraries, frameworks, or developer tools used by other engineers.</li>
<li>You have hands-on experience working with scalable databases such as PostgreSQL.</li>
<li>You approach engineering problems with strong analytical thinking and practical solutions.</li>
<li>You communicate clearly and collaborate well with cross-functional teams.</li>
<li>Experience working in e-commerce, financial systems, or high-volume transaction environments is a plus.</li>
</ul>
<p><strong>Where You&#39;ll Be</strong></p>
<p>This role is based in London or Stockholm.</p>
<p>We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows for flexibility to work from home.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>modern JavaScript development, Java, Python, cloud-native environments, scalable databases, PostgreSQL, analytical thinking, collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service with millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/0c364c03-4f52-4cbc-9dd1-8e2524d269ab</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>22fe5cb2-ba9</externalid>
      <Title>Engineering Manager, Datastores</Title>
      <Description><![CDATA[<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of creativity.</p>
<p>This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers.</p>
<p>We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>
<p>We&#39;re looking for an Engineering Manager, Datastores to lead the team responsible for the reliability, scalability, and evolution of Webflow’s core production databases, primarily MongoDB and PostgreSQL. This team operates at the heart of our application and hosting stack, enabling product teams to ship confidently while maintaining high standards of performance, durability, security, and data residency.</p>
<p>Webflow’s product and hosting platform operates at a significant scale. The Datastores team sits at a critical boundary between application velocity and system durability. This is a high-leverage leadership role at the core of Webflow’s infrastructure strategy.</p>
<p><strong>About the role:</strong></p>
<ul>
<li>Lead and grow a team of Database engineers responsible for MongoDB and PostgreSQL in production.</li>
<li>Own the operational excellence of our database layer, including availability, durability, performance, cost efficiency, and data residency.</li>
<li>Drive roadmap and strategy for multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, and infrastructure automation (Pulumi/Terraform).</li>
<li>Partner with Product Engineering to guide new access patterns, review high-impact launches for database risk, and establish guardrails that enable velocity without compromising reliability.</li>
<li>Improve reliability through proactive failure-mode detection, clear SLOs, actionable alerting, and high-quality incident response and retrospectives.</li>
<li>Build self-service tooling and paved roads for migrations, connection management, indexing, and query best practices.</li>
<li>Mentor and grow senior and staff engineers while contributing to broader infrastructure strategy across AWS, Kubernetes, and stateful systems architecture.</li>
</ul>
<p><strong>About you:</strong></p>
<ul>
<li>BS / BA college degree or relevant experience</li>
<li>Business-level fluency to read, write and speak in English</li>
<li>2+ years of experience leading high-performing engineering teams.</li>
<li>6+ years of hands-on experience operating and scaling production databases (MongoDB and/or PostgreSQL preferred).</li>
<li>Experience running business-critical, high-throughput systems with strong availability and durability requirements.</li>
</ul>
<p>You’ll thrive in this role if you:</p>
<ul>
<li>Bring deep expertise in operating and scaling production databases (e.g., replication, failover, indexing, query planning, migrations) and have led teams supporting stateful, multi-region systems with strict uptime requirements.</li>
<li>Balance strong architectural judgment with pragmatism, evolving our datastore strategy while enabling product teams to ship quickly and safely.</li>
<li>Think in terms of SLOs, capacity models, and long-term architectural trade-offs, with hands-on experience in infrastructure as code (Pulumi/Terraform), Kubernetes, and AWS.</li>
<li>Bring strong systems-level thinking to performance and reliability, identifying root causes across application, database, and infrastructure layers and building preventative solutions.</li>
<li>Lead calmly through high-severity incidents, drive blameless postmortems and systemic improvements, and build strong cross-functional relationships grounded in craftsmanship and continuous improvement.</li>
<li>Stay curious and open to growth: demonstrate a proactive embrace of AI, actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>
</ul>
<p><strong>Our Core Behaviors:</strong></p>
<ul>
<li>Build lasting customer trust.</li>
<li>Win together.</li>
<li>Reinvent ourselves.</li>
<li>Deliver with speed, quality, and craft.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Ownership in what you help build.</li>
<li>Health coverage that actually covers you.</li>
<li>Support for every stage of family life.</li>
<li>Time off that’s actually off.</li>
<li>Wellness for the whole you.</li>
<li>Invest in your future.</li>
<li>Monthly stipends that flex with your life.</li>
<li>Bonus for building together.</li>
</ul>
<p><strong>Be you, with us:</strong></p>
<p>At Webflow, equality is a core tenet of our culture. We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database engineering, MongoDB, PostgreSQL, infrastructure automation, Pulumi/Terraform, Kubernetes, AWS, leadership, team management, operational excellence, availability, durability, performance, cost efficiency, data residency, multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, self-service tooling, paved roads, migrations, connection management, query best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Webflow</Employername>
      <Employerlogo>https://logos.yubhub.co/webflow.com.png</Employerlogo>
      <Employerdescription>Webflow is a privately held company that builds a Digital Experience Platform.</Employerdescription>
      <Employerwebsite>https://webflow.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/webflow/jobs/7648674</Applyto>
      <Location>Argentina Remote</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>5b5929ae-868</externalid>
      <Title>Senior Software Engineer - Database Platform - Engine by Starling</Title>
      <Description><![CDATA[<p>At Engine by Starling, we&#39;re seeking a Senior Software Engineer to join our Cross Cutting Engineering team. As a Senior Software Engineer, you will play a crucial role in building and maintaining the reliable, scalable, and maintainable infrastructure and tooling that powers our entire software delivery pipeline.</p>
<p>Our mission is to build the software layer that makes the &#39;human-in-the-loop&#39; obsolete and empower our technology teams to operate their own databases. We&#39;re forming a new team to lead a multi-year roadmap focused on the development and evolution of two critical proprietary products:</p>
<ul>
<li>Database Manager: Our central orchestration platform and control plane.</li>
<li>Replication Manager: Our bespoke logical replication service.</li>
</ul>
<p>Your goal is to ensure that, as we onboard more global clients, our database infrastructure remains stable, resilient, and autonomous.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Scale for Multi-Tenancy: Design and develop Java-based services within Database Manager to manage an ever-growing number of isolated database estates for our SaaS clients.</li>
<li>Evolve Replication Manager: Enhance our data streaming orchestration to ensure &#39;Zero-Downtime&#39; transitions and migrations are seamless across a global footprint.</li>
<li>Architect Cross-Cloud Portability: Work with cloud native solutions to build a database layer that is cloud-agnostic, allowing Engine to deploy reliably across different providers.</li>
<li>Eliminate Manual Toil: Build high-level abstractions for complex maintenance tasks, ensuring the system proactively heals and maintains itself.</li>
<li>Execute a Multi-Year Roadmap: Contribute to the long-term technical strategy of how Engine handles mission-critical data at a global scale.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>A Software Engineer First: You have deep expertise in Java working with JDBC, and enjoy building robust, testable, and maintainable backend services.</li>
<li>Distributed Systems Enthusiast: You are excited by the challenge of &#39;multi-everything&#39;: multi-tenant, multi-region, and multi-cloud.</li>
<li>PostgreSQL &amp; Kubernetes Interest: You understand (or want to learn) the internals of Postgres and how to run it natively on Kubernetes.</li>
<li>Systems Thinker: You have a natural &#39;reluctance for manual implementation&#39; and believe that infrastructure should be managed via code and APIs.</li>
<li>A Security Mindset: Security is paramount when it comes to the storage and handling of data - we do not allow DBAs or engineers access to production data.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>33 days holiday (including public holidays, which you can take when it works best for you)</li>
<li>An extra day&#39;s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company enhanced pension scheme</li>
<li>Life insurance at 4x your salary &amp; group income protection</li>
<li>Private Medical Insurance with VitalityHealth including mental health support and cancer care.</li>
<li>Partner benefits include discounts with Waitrose, Mr&amp;Mrs Smith and Peloton</li>
<li>Generous family-friendly policies</li>
<li>Incentivised refer-a-friend scheme</li>
<li>Perkbox membership giving access to retail discounts, a wellness platform for physical and mental health, and weekly free and boosted perks</li>
<li>Access to initiatives like Cycle to Work, Salary Sacrificed Gym partnerships and Electric Vehicle (EV) leasing</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, JDBC, PostgreSQL, Kubernetes, Distributed Systems, Cloud Native Solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Engine by Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Engine by Starling is a software-as-a-service (SaaS) business that provides technology to banks and financial institutions worldwide.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3F018DA316</Applyto>
      <Location>Manchester</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>cb592721-c78</externalid>
      <Title>Associate DevOps Engineer</Title>
      <Description><![CDATA[<p><strong>Associate DevOps Engineer991</strong></p>
<p><strong>What we&#39;re all about.</strong></p>
<p>Do you ever have the urge to do things better than the last time? We do. And it&#39;s this urge that drives us every day. Our environment of discovery and innovation means we&#39;re able to create deep and valuable relationships with our clients to create real change for them and their industries. It&#39;s what got us here – and it&#39;s what will make our future. At Quantexa, you&#39;ll experience autonomy and support in equal measures allowing you to form a career that matches your ambitions. 41% of our colleagues come from an ethnic or religious minority background. We speak over 20 languages across our 47 nationalities, creating a sense of belonging for all.</p>
<p><strong>We&#39;re heading in one direction, the future. We&#39;d love you to join us.</strong></p>
<p>At Quantexa we believe that people and organisations make better decisions when those decisions are put in context – we call this Contextual Decision Intelligence. Contextual Decision Intelligence is the new approach to data analysis that shows the relationships between people, places and organisations - all in one place - so you gain the context you need to make more accurate decisions, faster.</p>
<p><strong>What will you be doing?</strong></p>
<p>You&#39;ll be joining one of our DevOps teams in our R&amp;D department, working on the Quantexa Cloud Platform and accompanying solutions. The platform comprises a landscape of low-maintenance, on-demand, and highly secure environments. Our environments host our software for our customers and partners to use; they also serve a variety of internal use cases, including underpinning the work of our R&amp;D teams to develop Quantexa Platform software.</p>
<p>You&#39;ll be heavily involved with our cloud-based technical infrastructure, with responsibilities that include improving the availability and resilience of our platform, improving its usability and security, ensuring we stay at the forefront of technical innovation, and reducing toil across our estate.</p>
<p>You will also work alongside our software engineering teams to leverage DevOps techniques to support our software release activities and work on unique cloud-based product offerings for our customers to use in their own DevOps processes on their own Cloud estate.</p>
<p><strong>Our tech stack</strong></p>
<ul>
<li>A strong focus on Kubernetes &amp; GitOps, utilising tools like ArgoCD and Istio</li>
<li>Infrastructure Management - CasC, IaC (Terraform, Docker, Ansible, Packer)</li>
<li>Hybrid public Cloud, primarily GCP &amp; Azure, but also some AWS</li>
<li>DevOps tooling/automation with the best tool for the job, commonly Bash, Python, Groovy, Golang</li>
<li>Provisioning stack includes Elasticsearch, Spark, PostgreSQL, Valkey, Airflow, Kafka, etcd</li>
<li>Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>We are looking for candidates who:</strong></p>
<ul>
<li>Take pride in designing, building and delivering high quality well engineered solutions to complex problems</li>
<li>Take a big picture approach to solving problems, taking care to ensure that the solution works well within the wider system</li>
<li>Have commercial or non-commercial experience with programming/scripting/automation</li>
<li>Have a good appreciation for information security principles</li>
</ul>
<p><strong>Experience in the following would be beneficial:</strong></p>
<ul>
<li>Experience with infrastructure management and general Linux administration</li>
<li>Experience with software build and release engineering</li>
<li>Exposure to a handful of the key parts of our tech stack listed above</li>
</ul>
<p><strong>Benefits</strong></p>
<p><strong>Why join Quantexa?</strong></p>
<p>Our perks and quirks.</p>
<p>What makes you Q will help you realize your full potential, flourish, and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>
<p>We offer:</p>
<ul>
<li>Competitive salary and Company Bonus</li>
<li>Flexible working hours in a hybrid workplace &amp; free access to global WeWork locations &amp; events</li>
<li>Pension Scheme with a company contribution of 6% (if you contribute 3%)</li>
<li>25 days annual leave (with the option to buy up to 5 days) + birthday off!</li>
<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>
<li>Family: Enhanced Maternity, Paternity, Adoption, or Shared Parental Leave</li>
<li>Private Healthcare with AXA</li>
<li>EAP, Well-being Days, Gym Discounts</li>
<li>Free subscription to Calm, the #1 app for meditation, relaxation and sleep</li>
<li>Workplace Nursery Scheme</li>
<li>Team&#39;s Social Budget &amp; Company-wide Summer &amp; Winter Parties</li>
<li>Tech &amp; Cycle-to-Work Schemes</li>
<li>Volunteer Day off</li>
<li>Dog-friendly Offices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, GitOps, ArgoCD, Istio, Infrastructure Management, CasC, IaC, Terraform, Docker, Ansible, Packer, Hybrid public Cloud, GCP, Azure, AWS, DevOps tooling/automation, Bash, Python, Groovy, Golang, Elasticsearch, Spark, PostgreSQL, Valkey, Airflow, Kafka, etcd, Fluentd, Prometheus, Grafana, Alertmanager</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Quantexa is a software company providing Contextual Decision Intelligence, helping organisations make better decisions by showing the relationships between people, places and organisations.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/imLeMwxTKuwvDpxHC2mvRB/hybrid-associate-devops-engineer-in-london-at-quantexa</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>60767ed3-a21</externalid>
      <Title>PEGA CLM Developer</Title>
      <Description><![CDATA[<p>Capgemini is seeking an experienced Pega Developer with strong experience in Pega KYC / CLM frameworks. The ideal candidate will possess a solid engineering foundation, strong analytical ability, and hands-on experience in designing, developing, and deploying scalable enterprise applications.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Analyze complex technical challenges and propose effective, scalable solutions.</li>
<li>Design and implement software systems using OOA/OOD principles and industry-standard architectural practices.</li>
<li>Deliver high-quality application components using Java, J2EE frameworks, APIs, and microservices architecture.</li>
<li>Lead or participate in projects following Agile or Waterfall methodologies.</li>
<li>Apply best practices for source control management, branching strategy, and versioning.</li>
<li>Develop high-performance, maintainable code following established coding standards and engineering practices.</li>
<li>Design and build BPM workflows, task management processes, and rule-based systems.</li>
<li>Work hands-on with Pega PRPC, including case management, workflows, rules configuration, and integrations.</li>
<li>Develop SQL scripts, procedures, and database logic using PL/SQL and PostgreSQL.</li>
<li>Build and deploy containerized applications using OpenShift Container Platform (OCP) and Kubernetes.</li>
<li>Collaborate with cross-functional teams to deliver high-quality enterprise-grade applications.</li>
<li>Ensure robust integration between Pega components and Java-based microservices.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Pega System Architect certification (SSA preferred).</li>
<li>Strong hands-on experience in Pega PRPC.</li>
<li>Experience with Pega KYC / CLM frameworks.</li>
<li>Familiarity with Pega Infinity platform and its modernized capabilities.</li>
</ul>
<p>Benefits:</p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega System Architect certification, Pega PRPC, Pega KYC / CLM frameworks, Pega Infinity platform, Java, J2EE frameworks, APIs, microservices architecture, OOA/OOD principles, industry-standard architectural practices, source control management, branching strategy, versioning, PL/SQL, PostgreSQL, OpenShift Container Platform (OCP), Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/iRtQrHoZHEq2chDdZuuoxD/hybrid-pega-clm-developer-in-bengaluru-at-capgemini</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c0ae18ab-473</externalid>
      <Title>PEGA CLM Developer</Title>
      <Description><![CDATA[<p>Capgemini is seeking an experienced Pega Developer with strong experience in Pega KYC / CLM frameworks. The ideal candidate will possess a solid engineering foundation, strong analytical ability, and hands-on experience in designing, developing, and deploying scalable enterprise applications.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Analyze complex technical challenges and propose effective, scalable solutions.</li>
<li>Design and implement software systems using OOA/OOD principles and industry-standard architectural practices.</li>
<li>Deliver high-quality application components using Java, J2EE frameworks, APIs, and microservices architecture.</li>
<li>Lead or participate in projects following Agile or Waterfall methodologies.</li>
<li>Apply best practices for source control management, branching strategy, and versioning.</li>
<li>Develop high-performance, maintainable code following established coding standards and engineering practices.</li>
<li>Design and build BPM workflows, task management processes, and rule-based systems.</li>
<li>Work hands-on with Pega PRPC, including case management, workflows, rules configuration, and integrations.</li>
<li>Develop SQL scripts, procedures, and database logic using PL/SQL and PostgreSQL.</li>
<li>Build and deploy containerized applications using OpenShift Container Platform (OCP) and Kubernetes.</li>
<li>Collaborate with cross-functional teams to deliver high-quality enterprise-grade applications.</li>
<li>Ensure robust integration between Pega components and Java-based microservices.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Pega System Architect certification (SSA preferred).</li>
<li>Strong hands-on experience in Pega PRPC.</li>
<li>Experience with Pega KYC / CLM frameworks.</li>
<li>Familiarity with Pega Infinity platform and its modernized capabilities.</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>Competitive compensation and benefits package:</p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
<p>Note: Benefits differ based on employee level.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega System Architect certification, Pega PRPC, Pega KYC / CLM frameworks, Pega Infinity platform, Java, J2EE frameworks, APIs, microservices architecture, OOA/OOD principles, industry-standard architectural practices, source control management, branching strategy, versioning, PL/SQL, PostgreSQL, OpenShift Container Platform (OCP), Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/xx7ytYPM94Rw69yZBrG9rW/hybrid-pega-clm-developer-in-chennai-at-capgemini</Applyto>
      <Location>Chennai, Tamil Nadu, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>33d62f7b-fe0</externalid>
      <Title>Appian Developer</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Appian Developer with 6+ years of strong technical experience in Appian development, database technologies, cloud integrations, and DevOps practices. The ideal candidate should be capable of designing scalable solutions, supporting applications at platform and application levels, and working effectively in a global, multi-cultural environment.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, develop, and implement Appian applications using Appian SAIL, integrations, and best practices.</li>
<li>Write, optimize, and debug database queries and design scalable database solutions.</li>
<li>Develop and test APIs for seamless system integration.</li>
<li>Implement and maintain CI/CD pipelines following DevOps methodologies.</li>
<li>Integrate Appian applications with GCP cloud services using APIs and other integration approaches.</li>
<li>Work with Oracle SQL, PostgreSQL, MariaDB, and other database technologies to develop robust solutions.</li>
<li>Contribute to cloud adoption initiatives involving GCP or AWS.</li>
<li>Support applications at both platform and application levels.</li>
<li>Design and develop integrations with third-party systems.</li>
<li>Work collaboratively in Agile Scrum teams; utilize tools like JIRA for tracking and delivery.</li>
<li>Provide technical guidance and mentorship to junior developers.</li>
<li>Collaborate with stakeholders across global teams with strong communication, documentation, and presentation skills.</li>
<li>Use common shell commands and scripting for automation or troubleshooting.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>6+ years of strong technical experience with Appian Development.</li>
<li>Hands-on experience with SAIL, SQL, Appian Integrations, Mule APIs, and related tools.</li>
<li>Strong experience with Oracle SQL, PostgreSQL, MariaDB databases.</li>
<li>Knowledge of, or eagerness to self-learn, BPM tools such as Appian.</li>
<li>Experience with API development and testing.</li>
<li>Experience integrating systems with GCP Cloud services.</li>
<li>Knowledge of cloud technologies such as GCP / AWS (services, databases, integration patterns).</li>
<li>Experience across different Java platforms.</li>
<li>Familiarity with DevOps CI/CD pipelines and tools.</li>
<li>Strong understanding of Agile Scrum methodology and tools like JIRA.</li>
<li>Strong analytical, communication, and stakeholder management skills.</li>
<li>Ability to work in a multi-cultural, global team environment.</li>
<li>Ability to work independently, handle pressure, and balance multiple priorities.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Appian Development, SAIL, SQL, Appian Integrations, Mule APIs, Oracle SQL, PostgreSQL, MariaDB, API development, GCP Cloud services, AWS, Java platforms, DevOps CI/CD pipelines, Agile Scrum methodology, JIRA</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/4xoriYun4AovgF1qY6KFkV/hybrid-appian-developer-in-hyderabad-at-capgemini</Applyto>
      <Location>Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>