<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipeline automation (Jenkins, TeamCity, AWS CodePipeline), containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>59421d7b-b28</externalid>
      <Title>Full Stack Engineer - Real-Time Trading</Title>
      <Description><![CDATA[<p>We are seeking a Full Stack Engineer to join our EQ Real-Time P&amp;L &amp; Risk team. This team is responsible for designing, developing, and supporting technology platforms that enable our businesses to view, evaluate, hedge, and trade live positions, P&amp;L, and risk.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with application development teams, technology management, and the business to design, prototype, and implement next-generation web UIs and mobile apps.</li>
<li>Develop, maintain, and support the existing Java Client UI used by a quarter of the firm.</li>
<li>Contribute to the application development and architecture of highly scalable real-time UIs.</li>
</ul>
<p>Qualifications, Skills, and Requirements:</p>
<ul>
<li>5+ years of full-stack development experience, preferably within a financial services firm supporting real-time UIs.</li>
<li>Expertise with Core Java and Spring.</li>
<li>Excellent grasp of data structures and algorithms and the ability to learn and adopt new technologies quickly.</li>
<li>Familiarity with database technologies – Advanced SQL, NoSQL, Time-series databases (KDB).</li>
<li>Experience with event-driven architecture using message bus and caching technologies like Solace, Kafka, Pulsar, Memcached, Redis.</li>
<li>Experience working with various monitoring tools like Datadog, ELK stack.</li>
<li>A strong interest in financial markets and a desire to work directly with investment professionals.</li>
<li>A good team player with a strong willingness to participate and help others.</li>
<li>Drive to learn and experiment.</li>
</ul>
<p>Nice-to-have:</p>
<ul>
<li>Proficiency with Angular UI is preferred; React will also be considered.</li>
<li>Familiarity with equities and equity derivatives within a real-time electronic trading environment is preferred.</li>
<li>Experience with KDB+ q or C/C++.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Core Java, Spring, Advanced SQL, NoSQL, Time-series databases (KDB), Solace, Kafka, Pulsar, Memcached, Redis, Datadog, ELK stack, Angular UI, React, equities, equity derivatives, KDB+ q, C/C++</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology provider for the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954774219</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a90d311-fba</externalid>
      <Title>Full Stack Engineer - Equities Autocallables</Title>
      <Description><![CDATA[<p>This role is part of a global team responsible for enhancing and supporting a real-time trade capture platform that processes, normalizes, and enriches the firm&#39;s executions across multiple asset classes. The platform feeds executions into downstream systems including real-time P&amp;L, risk, and reporting.</p>
<p>The successful candidate will focus on a Private Credit buildout, with particular emphasis on equities and options, and on integrating with third-party platforms such as Murex and ION. They will design, develop, and maintain Java-based services that support a real-time trade capture platform for our autocallable buildout, and build and support Kafka-based streaming pipelines to process, normalize, and distribute trading and reference data to downstream systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating closely with portfolio managers, traders, operations, and risk teams to understand requirements and translate them into robust technical solutions.</li>
<li>Contributing to the architecture and design of low-latency, high-availability components, including multithreaded and distributed systems.</li>
<li>Monitoring, troubleshooting, and resolving production issues related to trading workflows, data integrity, and system performance.</li>
</ul>
<p>We are looking for a highly skilled and experienced software engineer with a strong background in Java, Kafka, and front-end technologies using TypeScript/JavaScript; in this role you&#39;ll be using Angular. You should have a solid understanding of object-oriented design, design patterns, and multithreading in distributed systems, and hands-on experience with unit testing and integration testing frameworks and best practices.</p>
<p>In addition, you should be familiar with CI/CD pipelines (Jenkins) and DevOps tools/practices (e.g., Git, build tools, automated testing, deployment automation), have experience with SQL databases such as Postgres and SQL Server, and be comfortable with modern IDEs and developer productivity tools, including openness to AI-assisted development tools and modern developer workflows.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Java, Kafka, Angular, Typescript, Postgres, SQLServer, Jenkins, Git, CI/CD pipeline, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global financial technology company that provides real-time trade capture platforms for various asset classes.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954367614</Applyto>
      <Location>Miami, Florida, United States of America · New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:
<ul>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
</ul>
</li>
<li>Streamlining development and deployment processes.</li>
<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team, as well as proactively monitoring the production environment.</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache/Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides investment management services to clients. It is a leading investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af78786b-a0a</externalid>
      <Title>Software Engineer - Compliance / Regulatory Reporting</Title>
      <Description><![CDATA[<p>The Compliance/Regulatory Reporting technology team at Millennium builds solutions to meet the firm&#39;s global regulatory and reporting obligations.</p>
<p>We use AI-assisted development tools (e.g., Claude Code), cloud-native/serverless architectures on AWS, and modern full-stack technologies (C#, Angular, SQL), with a strong focus on Domain-Driven Design (DDD) and automated testing.</p>
<p>The role is suited to engineers who have delivered real-time, mission-critical systems in high trade volume, distributed and fault-tolerant environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable, real-time Regulatory/Compliance applications using C#/.NET, Angular, and SQL, leveraging AI-assisted tools to accelerate development and improve quality.</li>
<li>Model business domains using DDD (bounded contexts, aggregates, entities, value objects, domain services, domain events) with a strong focus on business correctness and ubiquitous language.</li>
<li>Architect and implement cloud-native/serverless solutions on AWS, including:
<ul>
<li>Event-driven services using AWS Lambda and messaging/streaming (Kafka, SQS, SNS).</li>
<li>Containerized microservices using Docker and Kubernetes (e.g., Amazon EKS).</li>
</ul>
</li>
<li>Build and maintain Angular front-ends that integrate securely and efficiently with backend APIs and domain services.</li>
<li>Design and optimize relational data models and SQL queries (SQL Server, Snowflake) for high-volume, low-latency workloads.</li>
<li>Drive a test-first mindset with strong automated test coverage (unit, integration, contract, and end-to-end) for critical domain workflows and controls.</li>
<li>Collaborate with global business and Compliance stakeholders to understand requirements, shape domain models, and deliver auditable, production-ready solutions.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Core Engineering &amp; Full-Stack Skills</strong></p>
<ul>
<li>Practical experience with AI-assisted tools (e.g., Claude Code, GitHub Copilot) for code generation/refactoring, test creation, debugging, and documentation</li>
<li>Expert-level C#/.NET and strong object-oriented design skills</li>
<li>Solid experience building Angular applications (components, state, routing, API integration)</li>
<li>Advanced SQL skills for schema design and complex queries (SQL Server, Snowflake)</li>
<li>Experience with high-throughput, concurrent/multithreaded systems</li>
<li>Kafka or similar messaging experience, including using JSON and Avro for data contracts in streaming and messaging</li>
<li>Strong understanding of unit testing, Dependency Injection, design patterns, concurrency, and SOLID principles</li>
<li>Experience with Git and GitHub in a collaborative, code-review-driven workflow</li>
</ul>
<p><strong>Soft Skills &amp; Domain Knowledge</strong></p>
<ul>
<li>Excellent analytical and problem-solving abilities.</li>
<li>Self-starter who thrives in a fast-paced, globally distributed environment.</li>
<li>Strong written and verbal communication skills with the ability to explain domain models, testing strategies, and architectural decisions to varied audiences.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-assisted tools, C#/.NET, Angular, SQL, Domain-Driven Design, AWS, Kafka, Git, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology builds solutions to meet global regulatory and reporting obligations.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955321458</Applyto>
      <Location>Singapore, Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>107cbb3f-b6c</externalid>
      <Title>Production Support Engineer</Title>
      <Description><![CDATA[<p>The Production Support Engineer role is a hands-on, business-facing position that requires understanding how applications support the business, investigating functional and data-related issues, and communicating clearly with users under pressure.</p>
<p>The Core Technology Production Support team supports a suite of business-critical financial applications used by Middle Office, Operations, Treasury, and Trading. These platforms are central to the firm&#39;s PnL, risk, cash, trade processing, and regulatory reporting workflows.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>End-to-end ownership of the production environment</li>
<li>Infrastructure management</li>
<li>Release planning and deployment</li>
<li>Incident and problem management, including root cause analysis</li>
<li>Capacity Planning / BCP Testing</li>
<li>Build strong relationships with development and end-users/clients</li>
<li>Foster the DevOps culture</li>
<li>Focus on client service and delivery</li>
<li>Become the go-to person for your area of responsibility</li>
<li>Build subject matter expertise</li>
<li>Create and maintain high quality documentation and runbooks</li>
<li>Cross train other Support team members</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Electrical Engineering, or a related field.</li>
<li>2+ years’ experience supporting an enterprise environment</li>
<li>Must have previous experience supporting business facing applications</li>
<li>Strong scripting skills in one of the following: Python (preferred), PowerShell, Perl, etc.</li>
<li>Excellent SQL skills and knowledge of various database systems</li>
<li>Must be able to run and understand complex queries</li>
<li>Ability to support both Windows and Unix/Linux environments</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience working in a trading environment</li>
<li>Exposure to the following:
<ul>
<li>CI/CD (Jenkins/Octopus/Artifactory)</li>
<li>Metrics/KPIs (Datadog/Influx/Tableau)</li>
<li>Kafka</li>
<li>Kubernetes</li>
<li>AI (MCP/Agents)</li>
</ul>
</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, PowerShell, Perl, SQL, Windows, Unix/Linux, CI/CD, Metrics/KPIs, Kafka, Kubernetes, AI</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides financial applications used by Middle Office, Operations, Treasury, and Trading.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755943534669</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9bb11411-3a5</externalid>
      <Title>Full Stack Developer – Reference Data</Title>
      <Description><![CDATA[<p>We are seeking a skilled Full Stack Developer to enhance our Enterprise Reference Data platform, the central source of financial data across the firm.</p>
<p>The successful candidate will play a key role in evolving our data platform, services, and tools to meet new customer requirements. The platform is built on a modern tech stack, including Java, Kafka, AWS (EKS), and Angular, offering scalable and streaming capabilities.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop and maintain full-stack solutions using Java (Spring Framework, GraphQL, REST API, Kafka) and Angular.</li>
<li>Ensure proper ingestion, curation, storage, and management of data to meet business needs.</li>
<li>Write and execute unit, performance, and integration tests.</li>
<li>Collaborate with cross-functional teams to solve complex data challenges.</li>
<li>Work closely with users to gather requirements and convert them into an actionable plan.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Minimum of 5-7 years of professional Java development experience, focusing on API- and Kafka-based architectures.</li>
<li>Minimum of 4-5 years of strong Angular development skills with backend integration expertise.</li>
<li>Hands-on experience with automated testing (unit, performance, integration).</li>
<li>5+ years of database development experience (any RDBMS).</li>
<li>Analytical and problem-solving skills with the ability to work independently in a fast-paced environment.</li>
<li>Excellent communication skills to effectively collaborate with users and other teams across different regions.</li>
<li>Self-motivated and capable of working under pressure.</li>
<li>Experience working in Financial Services or a Front Office Environment is highly preferred.</li>
<li>Experience working in the Reference Data domain is a plus.</li>
<li>Familiarity with AI developer tools.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kafka, Angular, Spring Framework, GraphQL, REST API, AWS (EKS), database development, automated testing, unit testing, performance testing, integration testing, AI developer tools, Financial Services, Front Office Environment, Reference Data domain</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955407188</Applyto>
      <Location>Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems with analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform optimally, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build, and test data products that are complex or large scale, and build teams to deliver data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognized data modeling patterns and standards and when to apply them, and compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository, present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products, and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy, and assist with its implementation and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct, and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, define test conditions, identify issues and risks associated with the work, and analyze and report on test activities and results.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Python, Java, Hadoop, Spark, Kafka, ETL processes and tools, SQL, NoSQL, relational databases, data warehousing, cloud platforms (GCP, AWS, Azure), data modeling and design, scalable data pipelines, RESTful APIs, data integration</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</p>
<p>The ideal candidate will have:</p>
<ul>
<li>Strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles.</li>
<li>Proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</li>
<li>Practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</li>
<li>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines.</li>
<li>A good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</li>
<li>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management.</li>
<li>Strong problem-solving and analytical skills, with a demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams.</li>
<li>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders.</li>
<li>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls.</li>
</ul>
<p>Desirable:</p>
<ul>
<li>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies.</li>
<li>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams.</li>
<li>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink).</li>
<li>Familiarity with Capital Markets concepts and operating models.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b09f8c4-a35</externalid>
      <Title>Salesforce Team Lead</Title>
      <Description><![CDATA[<p>We are seeking a Salesforce Team Lead to implement Salesforce for Millennium, focusing on the Business Development and Business Management functions, which cover the recruitment and ongoing support of Portfolio Managers. The ideal candidate will have strong technical skills in Salesforce, as well as a credible level of functional depth.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Develop and operate a recruiting platform for portfolio managers.</li>
<li>Participate in the design of various functions for Salesforce CRM solution.</li>
<li>Operate and further develop existing CRM and existing supporting products and applications.</li>
<li>Develop interfaces between CRM software and other systems internal and external to Millennium.</li>
<li>Lead the environment strategy as well as deployment strategy for the CRM solution.</li>
<li>Stay up to date with the latest Salesforce releases, features, and best practices.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>10+ years of experience in Salesforce sales cloud development/support.</li>
<li>4-7 years software development experience.</li>
<li>3+ years of C#/.NET with proficiency in a web framework such as ASP.NET MVC.</li>
<li>2+ years of WebUI/JavaScript/HTML/CSS with proficiency in at least one web framework such as Angular, React/Redux, or Ember.</li>
<li>In-depth knowledge of Salesforce Feedback Management, Salesforce Digital Experiences (Communities) and best practices.</li>
<li>Ability to drive AI innovation: proactively research, propose, and prototype AI-driven solutions that enhance business processes, customer experience, or operational efficiency within Salesforce.</li>
<li>Track record of proposing or leading AI initiatives that resulted in measurable business impact.</li>
<li>Experience building and consuming RESTful services.</li>
<li>Experience building integration with third-party systems to a Salesforce CRM.</li>
<li>Able to lead Salesforce deployments.</li>
<li>Ability to design, customize, and manage security entitlements for Salesforce Sales Cloud.</li>
<li>Development experience in Microsoft SQL Server, building complex SQL and stored procedures.</li>
<li>Strong and effective interpersonal skills with proven ability to develop positive relationships with business partners.</li>
<li>Highly analytical with good problem-solving skills; able to work independently in a fast-paced environment.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Salesforce Sales Cloud development/support, software development, C#/.NET, WebUI/JavaScript/HTML/CSS, Salesforce Feedback Management, Salesforce Digital Experiences (Communities), RESTful services, integration with third-party systems, Microsoft SQL Server, complex SQL and stored procedures, integration with real-time messaging platforms like Kafka, front office business data and processes, Agentforce</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology provides risk technology solutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954905556</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f7aeee90-9b7</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>Join HSBC and take on a role that will help you stand out in your career. We offer opportunities, support and rewards that will take you further. As an Associate Director, Software Engineering, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should have:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes, GCP, and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain experience and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
</ul>
<p>Certifications in Java or cloud technologies are a plus.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices architecture, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662228</Applyto>
      <Location>Hyderabad, Telangana, India · Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aee9464f-897</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>We are currently seeking an experienced professional to join our team in the role of Associate Director, Software Engineering.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should meet the following requirements:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes platform, GCP and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain experience and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
<li>Certifications in Java or cloud technologies are a plus.</li>
</ul>
<p>You&#39;ll achieve more when you join HSBC.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662222</Applyto>
      <Location>Bangalore, Karnataka, India · Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>467be5c4-940</externalid>
      <Title>Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer to join our Ads Engineering team. As a Machine Learning Engineer at Reddit, you will design and build production ML systems that power core experiences across the platform, including personalized recommendations, search, and ranking systems, intelligent advertising systems, and large-scale machine learning pipelines.</p>
<p>Our team works on high-impact systems that operate at internet scale and directly influence user experience, advertiser value, and business outcomes. You&#39;ll work on complex, real-world ML problems at massive scale, and contribute to technical strategy, architecture, and long-term ML roadmap.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and deploy production-grade machine learning models and systems at scale</li>
<li>Own the full ML lifecycle: from problem definition and feature engineering to training, evaluation, deployment, and monitoring</li>
<li>Build scalable data and model pipelines with strong reliability, observability, and automated retraining</li>
<li>Work with large-scale datasets to improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems</li>
<li>Partner cross-functionally with Product, Data Science, Infrastructure, and Engineering teams to translate complex problems into ML solutions</li>
<li>Improve system performance across latency, throughput, and model quality metrics</li>
<li>Research and apply state-of-the-art machine learning and AI techniques, including deep learning, graph- and transformer-based architectures, and LLM evaluation/alignment</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>3-5+ years of experience building, deploying, and operating machine learning systems in production</li>
<li>Strong programming skills in Python, Java, Go, or similar languages, with solid software engineering fundamentals</li>
<li>ML Fundamentals: a strong grasp of algorithms, from classic statistical learning (XGBoost, Random Forests, regressions) to DL architectures (Transformers, CNNs, GNNs)</li>
<li>Hands-on experience with modern ML frameworks (e.g., PyTorch, TensorFlow)</li>
<li>Experience designing scalable ML pipelines, data processing systems, and model serving infrastructure</li>
<li>Ability to work cross-functionally and translate ambiguous product or business problems into technical solutions</li>
<li>Experience improving measurable metrics through applied machine learning</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with recommender systems, search/ranking systems, advertising/auction systems, large-scale representation learning, or multimodal embedding systems</li>
<li>Familiarity with distributed systems and large-scale data processing (Spark, Kafka, Ray, Airflow, BigQuery, Redis, etc.)</li>
<li>Experience working with real-time systems and low-latency production environments</li>
<li>Background in feature engineering, model optimization, and production monitoring</li>
<li>Experience with LLM/Gen AI techniques, including but not limited to LLM evaluation, alignment, fine-tuning, knowledge distillation, RAG/agentic systems and productionizing LLM-powered products at scale</li>
<li>Advanced degree in Computer Science, Machine Learning, or related quantitative field</li>
</ul>
<p>Potential Teams:</p>
<ul>
<li>Ads Measurement Modeling</li>
<li>Ads Targeting and Retrieval</li>
<li>Advertiser Optimization</li>
<li>Ads Marketplace Quality</li>
<li>Ads Creative Effectiveness</li>
<li>Ads Foundational Representations</li>
<li>Ads Content Understanding</li>
<li>Ads Ranking</li>
<li>Feed Relevance</li>
<li>Search and Answers Relevance</li>
<li>ML Understanding</li>
<li>Notifications Relevance</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
<p>Pay Transparency:</p>
<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave.</p>
<p>To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors, including skills, depth of work experience, and relevant licenses/credentials, and may vary from the amounts listed below.</p>
<p>The base salary range for this position is: $185,800-$260,100 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$185,800-$260,100 USD</Salaryrange>
      <Skills>Python, Java, Go, PyTorch, TensorFlow, XGBoost, Random Forests, Regressions, Transformers, CNNs, GNNs, Spark, Kafka, Ray, Airflow, BigQuery, Redis, Recommender systems, Search/ranking systems, Advertising/auction systems, Large-scale representation learning, Multimodal embedding systems, Distributed systems, Large-scale data processing, Real-time systems, Low-latency production environments, Feature engineering, Model optimization, Production monitoring, LLM/Gen AI techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform that operates one of the internet&apos;s largest sources of information, with over 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7131932</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22bcbb50-ef4</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimisation skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>646a6426-386</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>We are seeking a talented Software Engineer to join our X Money team, focused on building a revolutionary global payment network that will serve over 600 million users and rival the world&#39;s largest financial institutions.</p>
<p>In this role, you will specialise in backend development, designing and optimising robust microservices to ensure scalability, security, and reliability. You will support full-stack efforts, collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and optimise microservices for high-concurrency transactions using Go, Postgres, and Kafka.</li>
<li>Collaborate on system architecture, testing, and monitoring to ensure uptime and performance.</li>
<li>Build internal tools using frontend technologies as needed to support operational efficiency.</li>
<li>Support the Technical Lead in risk mitigation and align with engineering, product, and compliance teams to drive project success.</li>
<li>Contribute to the development of secure, scalable systems for handling financial data and transactions.</li>
<li>Iterate rapidly on feedback to deliver high-quality solutions in a dynamic environment.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>5+ years of software engineering experience, with a strong focus on backend development.</li>
<li>Proficiency in Go or similar languages and experience with databases (e.g., Postgres) and streaming systems (e.g., Kafka).</li>
<li>Familiarity with building distributed systems for high-scale, low-latency environments.</li>
<li>Knowledge of handling secure financial data.</li>
<li>Ability to contribute to frontend development for internal tools when required.</li>
<li>Strong communication and problem-solving skills, with a collaborative mindset.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in fintech, payments, or regulatory frameworks (e.g., PCI-DSS, AML/KYC).</li>
<li>Prior work in a fast-paced, startup-like environment on greenfield projects.</li>
<li>Comfort navigating ambiguous requirements and iterating based on feedback.</li>
<li>Passion for leveraging AI to transform financial systems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Postgres, Kafka, backend development, microservices, scalability, security, reliability, distributed systems, financial data, frontend development, fintech, payments, regulatory frameworks, PCI-DSS, AML/KYC, fast-paced environment, greenfield projects, AI transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5007310007</Applyto>
      <Location>Tokyo, JP</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65befd80-0e2</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Staff-level backend software engineer to join our Live Pay team. You&#39;ll work cross-functionally with various teams and contribute to the design and development of key platform services. This person must be strong in JVM languages and event-driven architecture on AWS.</p>
<p>The Canada base salary range for this full-time position is $252,000-$308,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. This role will be hybrid from our Vancouver, CAN office, with 2 days a week in the office required.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the design and implementation of new features. Break down complex problems into their bare essentials, translate this complexity into elegant design, and create high-quality, clean code.</li>
<li>Make a meaningful impact on the lives of our community members.</li>
<li>Design, develop, and deliver large-scale systems.</li>
<li>Collaborate with and mentor other engineers while providing thoughtful guidance through code, design, and architecture reviews.</li>
<li>Contribute to defining technical direction, planning the roadmap, escalating issues, and synthesizing feedback to ensure team success.</li>
<li>Estimate and manage team project timelines and risks.</li>
<li>Care passionately about producing high-quality, efficient designs and code.</li>
<li>Constantly learn about new technologies and industry standards in software engineering.</li>
<li>Work cross-functionally with other teams, including analytics, design, product, marketing, and data science.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of backend software development experience</li>
<li>Bachelor&#39;s, Master’s, or PhD in computer science, computer engineering, or a related technical discipline, or equivalent industry experience.</li>
<li>Proficiency in at least one modern programming language, such as Java, Kotlin, Scala, or C#, and experience with at least one major framework such as Spring, Spring Boot, or ASP.NET Core.</li>
<li>Hands-on experience working in cloud environments: AWS, GCP, or Azure</li>
<li>Proficiency in event-driven systems such as Kafka, SQS, SNS, or Kinesis, and experience designing and operating scalable distributed systems.</li>
<li>Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations</li>
<li>Hands-on experience working with various databases, such as DynamoDB, MySQL, and ElasticSearch</li>
<li>Experience using AI-assisted development tools (e.g., Copilot, Cursor, LLMs) to improve engineering productivity</li>
<li>Experience with continuous integration and delivery tools, and experience in developing and executing functional and integration tests.</li>
<li>Familiarity with a clean architecture approach and software craftsmanship</li>
<li>Experience with Kubernetes and microservice architecture is a strong plus.</li>
<li>Excellent written and verbal communication skills.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$252,000-$308,000</Salaryrange>
      <Skills>Java, Kotlin, Scala, C#, Spring, Spring Boot, ASP.NET Core, AWS, GCP, Azure, Kafka, SQS, SNS, Kinesis, DynamoDB, MySQL, ElasticSearch, AI-assisted development tools, Continuous integration and delivery tools, Clean architecture approach, Software craftsmanship, Kubernetes, Microservice architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, delivering real-time financial flexibility for individuals living paycheck to paycheck. It has a healthy core business with a significant runway.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7680387</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>abf4ca4a-26d</externalid>
      <Title>Senior Software Engineer - Safety Experience</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Safety Experience team. As a key member of this team, you will design, build, and maintain product features and systems that prevent harmful activities while ensuring regulatory compliance. Your work will play a critical role in keeping our users safe, which is essential for our growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the development of highly-visible, user-facing products that protect our users.</li>
<li>Design, build, and deploy robust production APIs, backend services, and data pipelines to launch safety features at scale.</li>
<li>Collaborate cross-functionally with Product, Design, Policy, Data Science, ML, Legal, and T&amp;S Operations to create solutions that are both impactful and lovable.</li>
<li>Iterate on in-house tooling to supercharge our T&amp;S workflows.</li>
<li>Respond rapidly to the ever-evolving abuse and compliance landscape.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience writing Python and utilizing back-end API frameworks (Flask, Django).</li>
<li>5+ years of experience developing front-end interfaces with JavaScript (React, TypeScript) for both web and mobile platforms.</li>
<li>Familiarity with databases such as Cassandra, Postgres, and ScyllaDB.</li>
<li>Demonstrated success leading end-to-end delivery of complex projects: breaking down ambiguity, coordinating rollouts, and aligning stakeholders.</li>
<li>Demonstrated ability to troubleshoot, debug, and test complex systems in a live, production environment.</li>
<li>Exceptional communication and collaboration skills, with the ability to work well with cross-functional partners, designers, and other engineers.</li>
<li>Experience using metrics and dashboards to make data-driven decisions and develop insightful reports.</li>
<li>Experience utilizing AI tools like Claude Code and Cursor to supercharge dev workflows.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience in the Safety or Anti-Abuse domain.</li>
<li>Experience analyzing and visualizing data using Datadog or Mode.</li>
<li>Familiarity with real-time streaming systems like Kafka or Pub-Sub.</li>
<li>Ability to contribute to offline analytics jobs and processes.</li>
<li>Experience building and operating mobile-client features on iOS and Android.</li>
<li>Exposure to lower-level languages such as Go, Rust, and Elixir.</li>
<li>A strong moral compass that drives you to protect users and do the right thing.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Python, Flask, Django, JavaScript, React, TypeScript, Cassandra, Postgres, ScyllaDB, Claude Code, Cursor, Datadog, Mode, Kafka, Pub-Sub, Go, Rust, Elixir</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform used by over 200 million people each month for various purposes, primarily gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8377133002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed4bd662-c67</externalid>
      <Title>Senior Solutions Architect, Commercial - San Francisco</Title>
      <Description><![CDATA[<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>
<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>
<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>
<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>
<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>
<p>Must reside in San Francisco, CA</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000 - $250,000 OTE</Salaryrange>
      <Skills>Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that can simplify code, make applications more reliable, and help developers focus on the important things like delivering features faster. It is growing and building the team that will make that happen.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ceb4835-0f1</externalid>
      <Title>Manager, Professional Services</Title>
      <Description><![CDATA[<p>As a Manager, Professional Services, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>
<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>
<li>6+ years of experience working on Big Data Architectures independently.</li>
<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>
<li>Experience working on Databricks platform is a plus.</li>
<li>Documentation and white-boarding skills.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel for onsite customer engagements within India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8503068002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8317ba42-502</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>
<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: 7+ years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8041698002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3d37bf3-6e8</externalid>
      <Title>Staff Software Engineer, Backend (Consumer- Retail Cash)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Staff Software Engineer to join our Consumer Cash team, which provides the foundational cash layer for Coinbase’s Consumer business.</p>
<p>As a Staff Engineer, you will be the technical anchor for Cash services, defining the architecture and roadmap for core cash capabilities.</p>
<p>You will be part of the vision to build a compelling and trusted single cash balance that serves Everything Exchange users’ risk-off needs.</p>
<p>This role is for an engineer who thrives on tackling complex, high-impact distributed systems that require high reliability and performance, especially in a trading and financial technology context.</p>
<p>Responsibilities:</p>
<ul>
<li>Serve as the technical leader and strategist for the Consumer Cash team, defining multi-quarter technical strategies that intersect multiple financial products.</li>
<li>Architect, develop, and own distributed systems that power low-latency APIs and event-driven pipelines that process large volumes of cash transactions with strong correctness guarantees.</li>
<li>Provide technical structure and partner closely with management and stakeholders to translate business goals into a defined strategic roadmap.</li>
<li>Design and implement foundational, high-performance infrastructure components, leveraging tools like Kafka and Clickhouse in an event-sourced architecture.</li>
<li>Manage individual project priorities, deadlines, and deliverables with strong technical expertise.</li>
<li>Mentor and coach other team members on advanced design techniques, coding standards, and best practices for building robust value-add products.</li>
<li>Leverage our modern, diverse tech stack to write high-quality, production-ready code that is thoroughly tested and delivers a critical product to market.</li>
</ul>
<p>What we look for in you:</p>
<ul>
<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>
<li>Demonstrated experience with low-latency, event-driven, or distributed systems.</li>
<li>A background in building consumer-facing trading products, or any application that handles large amounts of streaming data, is a strong signal.</li>
<li>Passion for building an open financial system that brings the world together.</li>
<li>Intellectual curiosity, openness, and a passion for building a culture of positive energy and blameless truth-seeking.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience in payments, banking, wallets, or trading systems, especially transaction processing or ledgering.</li>
<li>Familiarity with the tech stack, including Golang, Clickhouse, Kafka, Redis, MongoDB.</li>
<li>Experience building financial, high reliability, or security systems.</li>
<li>Background in Blockchains (such as Bitcoin, Ethereum) or crypto-forward experience (e.g., interacting with Ethereum addresses, ENS, dApps).</li>
<li>Experience with a company going through rapid growth (from 10 to 100s of engineers).</li>
</ul>
<p>Job #: 75913</p>
<p>#LI-Remote</p>
<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$217,900-$217,900 CAD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,900-$217,900 CAD</Salaryrange>
      <Skills>software engineering, distributed systems, low-latency APIs, event-driven pipelines, Kafka, Clickhouse, Golang, MongoDB, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service that allows users to buy, sell, and store cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7659458</Applyto>
      <Location>Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3bf626c-40c</externalid>
      <Title>Senior Software Engineer II</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a software company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p>Trusted by 65,000+ companies in 160+ countries, Carta’s platform of software and services lays the groundwork so you can build, invest, and scale with confidence.</p>
<p><strong>The Team You’ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We’re excited to meet people who are energized by complex, ambiguous problems.</p>
<p><strong>The Problems You’ll Solve</strong></p>
<p>As a Senior Software Engineer II, you will lead technically complex projects and serve as a multiplier for your team.</p>
<ul>
<li>Drive Implementation: Lead the execution of complex technical projects, driving them from concept to production while maintaining high standards for performance and reliability.</li>
<li>Simplify Systems: Dig deep into our architecture to identify opportunities to simplify code and infrastructure, prioritizing changes that have a measurable business impact.</li>
<li>Leverage Modern Tooling: Use the best AI-assisted engineering tools to accelerate your workflow, improve code quality, and spend more of your time solving the high-level logic and unconventional problems.</li>
<li>Foster Growth: Act as a mentor and coach, raising the technical bar for your peers through diligent PR reviews and architectural guidance.</li>
<li>Collaborate Cross-Functionally: Partner with product and design to ensure we are building the right solution for the user, not just following a specification.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You have experience with (or a desire to learn) our core technologies: Python, Django, React, Postgres, and Kafka. We also utilize Java, gRPC, and AWS.</li>
<li>Execution: You can break down complex user stories into actionable tasks and execute them with minimal guidance.</li>
<li>Strategic Mindset: You understand the &#39;why&#39; behind your code and can articulate technical trade-offs to stakeholders.</li>
<li>Experience: We recommend 8+ years of professional software engineering experience for this level.</li>
</ul>
<p><strong>Salary</strong></p>
<p>Carta’s compensation package includes a market competitive salary, equity for all full time roles, exceptional benefits, and, for applicable roles, commissions plans.</p>
<p>Our expected cash compensation (salary + commission if applicable) range for this role is:</p>
<ul>
<li>$181,050 - $213,000 CAD in Toronto, Ontario, Canada</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,050 - $213,000 CAD</Salaryrange>
      <Skills>Python, Django, React, Postgres, Kafka, Java, gRPC, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7656149003</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9be280f4-cbc</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>
<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>
<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>
<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry|mid|senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a966b1bf-e76</externalid>
      <Title>Staff Software Engineer, Compute Infrastructure</Title>
<Description><![CDATA[<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers, powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>
<p>The MetalDev team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components, including CDUs, PDUs, and NVLink switches. Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>
<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>
<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>
<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration, while mentoring engineers to follow your lead.</li>
<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>
<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>
<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>
<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>
</ul>
<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU servers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4603505006</Applyto>
      <Location>Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>127228e6-c1a</externalid>
      <Title>Senior Support Engineer - Korean Speaking</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Support Engineer to join our Support team in South Korea. As a Senior Support Engineer, you will provide expert-level service to our APJ customers, ensuring technical customer issues are serviced within our contractual SLA and managed to resolution.</p>
<p>You will document and share your knowledge with the rest of the organization and our customers using Knowledge Centered-Services (KCS) methodology. You will also have a mindset of continuous improvement, in terms of efficiency of support processes and customer satisfaction.</p>
<p>To be successful in this role, you will need to work across multi-cultural and geographically distributed teams. You will have 3+ years of proven experience in Technical Support in a Software business, a technical background in fields like Information Technology, Network Engineering, or Software Engineering, and a &#39;Customer First&#39; mindset.</p>
<p>You will be a team player, able to work in a fast-paced environment with a positive and adaptable approach. You will have knowledge of databases (SQL/NoSQL) or search software technologies, experience with SaaS and/or distributed systems, experience with Linux/Unix and APIs, familiarity with Knowledge Centered-Services (KCS), and a highly collaborative working style.</p>
<p>Native Korean language skills and professional working proficiency in English are required, as well as effective verbal and written communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Technical Support, Knowledge Centered-Services (KCS), Databases (SQL / No SQL), Search software technologies, SaaS and/or Distributed systems, Linux/Unix, APIs, Native Korean language skills, Professional working proficiency in English, Experience with administering and/or troubleshooting Elastic products in a production environment, Experience with Networking and/or Load Balancers, Experience with Kubernetes, Experience with Message Brokering (e.g. Kafka)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. Its Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7712961</Applyto>
      <Location>South Korea</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diverse experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, taking into account a variety of factors, including qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7be2f955-b0a</externalid>
      <Title>Machine Learning Intern</Title>
<Description><![CDATA[<p>As a Fintech company where Machine Learning (ML) is a key driver of growth, our operations rely heavily on machine learning models, from business decisions to customer experiences. We seek talented and motivated students and recent graduates with a strong background in machine learning, deep learning, language models, generative AI, programming, and data analysis to join our 12-week Machine Learning Internship Program.</p>
<p>You will work on real-world projects, collaborate with experienced professionals, gain valuable experience in the fintech industry, and realise business and social impact. This role requires hybrid work from our Mountain View office, with 2 days a week in person. This internship will pay $40 per hour, with an expected 40 hours per week for the 12-week program.</p>
<p>Responsibilities:</p>
<ul>
<li>Train and fine-tune large-scale Foundation Models to support various fintech product use cases</li>
<li>Work with a large dataset, including structured and unstructured data</li>
<li>Help in ensuring improvements in our current ML systems via model, data, or experimentation upgrades</li>
<li>Gain hands-on experience with a wide array of technologies, including PyTorch, AWS, Kafka, Databricks, etc</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Actively pursuing a Master&#39;s or PhD in Computer Science, Information Technology, or a related field</li>
<li>Located in Mountain View, or have the ability to relocate there, for the duration of the internship</li>
<li>Strong understanding of statistical models and an in-depth understanding of machine learning and deep learning algorithms; familiarity with training or fine-tuning large-scale models, including sequence Transformer models</li>
<li>Interest in multimodal or multitask learning across structured, sequential, and behavioural data</li>
<li>Familiarity with AI tools, harness engineering, agentic workflows, etc.</li>
<li>Hands-on programming experience in Python and ML frameworks such as PyTorch</li>
<li>Equipped with good verbal and written communication skills</li>
<li>A background demonstrating strong problem-solving skills</li>
<li>Committed to taking ownership of projects, conducting thorough investigations, and driving initiatives to conclusion</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$40 per hour</Salaryrange>
      <Skills>machine learning, deep learning, language models, generative AI, programming, data analysis, PyTorch, AWS, Kafka, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn provides earned wage access to individuals with unique financial needs, allowing them to access their earnings as they earn them without mandatory fees, interest rates, or credit checks.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7770051</Applyto>
      <Location>Mountain View, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b4d74f5-cf9</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a software company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p><strong>The Team You&#39;ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We’re excited to meet people who are energized by complex, ambiguous problems. We look for owners and problem-solvers who are eager to dive into the details of their craft, and are motivated by building products and experiences that meaningfully expand access to ownership.</p>
<p><strong>The Problems You&#39;ll Solve</strong></p>
<p>As a Senior Software Engineer II, you will lead technically complex projects and serve as a multiplier for your team. You’ll work to:</p>
<ul>
<li>Drive Implementation: Lead the execution of complex technical projects, driving them from concept to production while maintaining high standards for performance and reliability.</li>
<li>Simplify Systems: Dig deep into our architecture to identify opportunities to simplify code and infrastructure, prioritizing changes that have a measurable business impact.</li>
<li>Leverage Modern Tooling: Use the best AI-assisted engineering tools to accelerate your workflow, improve code quality, and spend more of your time solving the high-level logic and unconventional problems.</li>
<li>Foster Growth: Act as a mentor and coach, raising the technical bar for your peers through diligent PR reviews and architectural guidance.</li>
<li>Collaborate Cross-Functionally: Partner with product and design to ensure we are building the right solution for the user, not just following a specification.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You have experience with (or a desire to learn) our core technologies: Python, Django, React, Postgres, and Kafka. We also utilize Java, gRPC, and AWS.</li>
<li>Execution: You can break down complex user stories into actionable tasks and execute them with minimal guidance.</li>
<li>Strategic Mindset: You understand the &#39;why&#39; behind your code and can articulate technical trade-offs to stakeholders.</li>
<li>Experience: We recommend 8+ years of professional software engineering experience for this level.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Django, React, Postgres, Kafka, Java, gRPC, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit. It supports 9,000+ funds and SPVs, representing nearly $185B in assets under management.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7652562003</Applyto>
      <Location>London, England, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1125d83c-1eb</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>
<p>This involves writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>You will be part of a team that builds highly technical products that fulfil real, important needs in the world. We constantly push the boundaries of data and AI technology, while simultaneously operating with the resilience, security and scale that is critical to making customers successful on our platform.</p>
<p>Our engineering teams build one of the largest-scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>
<p>We run thousands of Kubernetes clusters across all regions and orchestrate millions of VMs on a daily basis.</p>
<p>Competencies:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production level experience in one of: Java, Scala, C++, or similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6779233002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>79704b10-ff6</externalid>
      <Title>Software Engineer, Cloudforce One</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We are looking for great engineers to join our Cloudforce One team, which is responsible for identifying and disrupting cyber threats ranging from sophisticated cyber criminal activity to nation-state sponsored advanced persistent threats (APTs).</p>
<p>As a Software Engineer on this team, you will own the entire software development lifecycle, from design and architecture to deployment and monitoring, for systems that serve both threat disruption and legal response efforts.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, run, and scale distributed tools and services that support both cyber threat disruption and Legal Response efforts.</li>
<li>Develop critical data pipelines and services to collect, analyze, and expose threat intelligence data for Cloudforce One analysts and Cloudflare customers, helping to identify Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs).</li>
<li>Extend, improve, and maintain mission-critical Trust &amp; Safety solutions, including our CSAM Scanning Tool and other legal compliance pipelines.</li>
<li>Collaborate closely with Threat Operations, Trust &amp; Safety, Legal, and Product teams to understand goals and translate complex technical requirements into elegant, scalable solutions.</li>
</ul>
<p>Requirements</p>
<ul>
<li>At least 5 years of experience building large-scale software applications, preferably distributed systems.</li>
<li>Experience designing and integrating RESTful APIs and/or gRPC services.</li>
<li>Knowledge of SQL and common relational database systems such as PostgreSQL.</li>
<li>Prior experience writing production-ready code in Go and/or Typescript.</li>
<li>Familiarity with Rust.</li>
<li>Excellent debugging and optimization skills.</li>
<li>Expertise in writing well tested code.</li>
<li>Interest in opportunities to be a technical mentor for teammates.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Typescript, Rust, SQL, PostgreSQL, RESTful APIs, gRPC services, Distributed systems, Debugging, Optimization, Kafka, Redis, Kubernetes, Temporal, Web security, Industry standards for access control</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7309174</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f32fed2e-9ba</externalid>
      <Title>Engineering Manager, Data Transformation</Title>
      <Description><![CDATA[<p>As an Engineering Manager of the Data Transformation team, you will lead a team of engineers, collaborate with infrastructure and product engineering orgs, and advance the Data Transformation roadmap and adoption at Stripe.</p>
<p>You will be driving critical workstreams for Stripe&#39;s top priorities: delivering high-quality, materialized datasets to Stripe products and AI agents.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Delivering infrastructure and services that scale to our users&#39; needs with an eye on reliability and efficiency</li>
<li>Leading and managing a team of talented engineers on the team, providing mentorship, guidance, and support to ensure their success</li>
<li>Working with high-visibility teams and their stakeholders to support Infrastructure&#39;s key engineering initiatives</li>
<li>Understanding user needs and pain points to prioritize engineering work and deliver high quality solutions that meet user needs</li>
<li>Driving the execution of projects, overseeing the entire development lifecycle from planning to delivery, while maintaining high standards of quality and timely completion</li>
</ul>
<p>You will also provide hands-on technical leadership (architecture/design, vision/direction/requirements setting, and incident response processes) for your reports, work with leaders across the company to create and drive toward the longer term vision of Stripe&#39;s Data Transformation roadmap, and foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.</p>
<p>We&#39;re looking for someone who has 1-3 years of experience managing teams that have shipped and operated data pipelines and critical distributed-systems infrastructure, has successfully recruited and built great teams, works effectively cross-functionally, thinks rigorously, communicates effectively, and can make or coordinate hard decisions and trade-offs.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kafka, Flink, Spark, Airflow, Python, SQL, API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7688358</Applyto>
      <Location>N/A</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>887e0254-384</externalid>
      <Title>Engineering Manager (Platform - Identity)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Engineering Manager, you will lead the Identity Accounts team, the platform foundation that powers every user, organization, and account at Coinbase.</p>
<p>This is one of the most visible and business-critical engineering platforms at the company: your team’s services handle authentication, authorization, security settings, and account management for millions of customers across every Coinbase product.</p>
<p>You will manage engineers across three sub-teams (Foundations, Users Platformization, and Settings &amp; Account Management), drive roadmap execution in close partnership with your Tech Lead and Product Manager, and represent the team to 20+ internal product groups and key partners in Security, Risk, Compliance, and Design.</p>
<p>If you thrive at the intersection of deep technical problems and cross-functional leadership, this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and grow a team of engineers across backend, frontend, and site reliability, building a high-performing team through hiring, coaching, and career development.</li>
<li>Drive roadmap execution across three focused sub-teams: Foundations (authorization infrastructure), Users Platformization (decomposing the legacy monolith), and Settings &amp; Account Management (Security Settings 2.0, account navigation redesign).</li>
<li>Own reliability and operational excellence for 8+ mission-critical Tier-0/Tier-1 services, maintaining 99.99% uptime, championing engineering quality, and acting as quarterback during high-severity incidents.</li>
<li>Represent the team to internal product groups and key partners in Security, Risk, Compliance, and Design, building alignment and ensuring seamless integration support.</li>
<li>Partner with Product and your Tech Lead to define strategic roadmaps, prioritize initiatives, and translate complex constraints into simple, scalable platform solutions.</li>
<li>Champion engineering excellence: drive code and design reviews, set engineering standards, and build every capability to be composable and reusable across product lines (no bespoke, one-off integrations).</li>
<li>Accelerate internal customers by reducing new product team onboarding time to under 2 weeks and delivering excellent APIs, clear documentation, and strong integration support.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in software engineering, with at least 2 years of engineering management experience leading teams of 5+ engineers.</li>
<li>Proven track record shipping large-scale distributed systems serving millions of users in production.</li>
<li>Technical fluency in coding, system design, API architecture, and reliability tradeoffs, with the ability to be hands-on when needed (writing/reviewing code, leading incidents, triaging bugs).</li>
<li>Strong communicator who writes clearly, builds organizational alignment, and can represent the team effectively to senior leadership and cross-functional partners.</li>
<li>Experience building and scaling high-performing engineering teams through hiring, developing, and promoting talent.</li>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience in identity, authentication, authorization, or account management systems.</li>
<li>Prior experience leading a Platform team or similar domain with high internal customer dependency.</li>
<li>Familiarity with our stack: Go, gRPC, React, SpiceDB, Kubernetes, PostgreSQL, Kafka, Datadog.</li>
<li>Background building financial, high-reliability, or security systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>software engineering, engineering management, team leadership, technical fluency, coding, system design, API architecture, reliability tradeoffs, generative AI tools, copilots, identity, authentication, authorization, account management, Go, gRPC, React, SpiceDB, Kubernetes, PostgreSQL, Kafka, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service company that allows consumers and merchants to buy, sell, and store cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7731934</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b47cf70c-31a</externalid>
<Title>Director, Technical Solutions (Big Data/AI)</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>The Director of Data &amp; AI Support Engineering - Bangalore will lead and grow a regional team of Data &amp; AI technical experts in India, focused on ensuring the resiliency and smooth operation of customer production workloads.</p>
<p>This leader will oversee support operations during APJ and EMEA business hours, aligning closely with teams in other regions to ensure 24x7 support coverage.</p>
<p>The team resolves complex and long-running data engineering cases raised by Databricks customers to support the success of live use cases, which includes performance optimization, ensuring resiliency of production jobs, helping customers stabilize workloads on new products and features, and more.</p>
<p>Reporting to the Global Lead of Frontline Support Engineering - Data &amp; AI, you will understand the real-world business problems our customers are solving with data and be committed to helping them achieve the reliability and performance their systems need to meet their goals.</p>
<p><strong>The Impact You Will Have:</strong></p>
<ul>
<li>Serve as the India site leader for an elite team of Data &amp; AI specialists that can provide coverage of customers across EMEA &amp; APJ business hours.</li>
<li>Grow the technical expertise of the team to support successful adoption of new products and features of the Databricks platform for customer production workloads.</li>
<li>Engage with top customers to understand how to support their business needs with their Data &amp; AI strategy, in collaboration with field engineering and sales when required.</li>
<li>Partner with internal product engineering teams to make Databricks products better and more supportable.</li>
<li>Understand how to maintain high reliability of the Databricks platform to successfully achieve customer business goals.</li>
</ul>
<p><strong>Competencies &amp; Requirements:</strong></p>
<ul>
<li>Proven people leadership experience: 6+ years as a manager of managers.</li>
<li>18+ years in the IT industry, with a strong background in Software Engineering with specialization in Data Engineering, ideally with Big Data &amp; Cloud experience.</li>
<li>Experience leading large teams (100+ employees) in engineering, technical support, or consulting. Support experience is not required - but customer-facing experience is highly desirable.</li>
<li>Hands-on experience in at least two of the following at production scale:
<ul>
<li>Big Data (Spark, Hadoop, Kafka)</li>
<li>Machine Learning / Artificial Intelligence projects</li>
<li>Data Science / Streaming use cases</li>
</ul>
</li>
<li>Spark expertise is a big advantage.</li>
<li>Strong background in customer-facing support leadership roles.</li>
<li>Excellent troubleshooting skills across distributed systems.</li>
<li>Strong ownership mindset with the ability to thrive in a fast-paced, startup-like environment with evolving needs.</li>
<li>Bachelor’s/Master’s in Computer Science or equivalent technical field.</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please refer to the Databricks benefits page.</p>
<p><strong>Our Commitment to Diversity and Inclusion:</strong></p>
<p>We are committed to fostering an inclusive culture where everyone feels valued, respected, and empowered to contribute their best work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Machine Learning, Artificial Intelligence, Data Science, Streaming use cases, Spark, Hadoop, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8409447002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize the Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>03807164-210</externalid>
      <Title>Resident Solutions Architect</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements, addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, providing training, and completing other technical tasks that help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification while providing excellent customer service.</p>
<p>You will report to the Manager, Professional Services.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical Big Data projects, which may include building reference architectures, how-to guides, and production-grade MVPs</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>
<li>6+ years of experience working independently on Big Data architectures</li>
<li>Strong experience working in the Databricks ecosystem</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Experience working across cloud platforms (GCP / AWS / Azure)</li>
<li>Documentation and whiteboarding skills</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Willingness to travel for onsite customer engagements within India</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, GCP, AWS, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8081658002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b8624a9-e1b</externalid>
      <Title>Staff Backend Software Engineer, Ads Business Manager</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer on the Ads Business Manager team, you will develop a long-term technical strategy to unlock the next tier of agency enablement on the Reddit Ads Platform.</p>
<p>This is a high-agency position for an engineer who can navigate ambiguity and take decisive ownership of the technical direction in collaboration with other engineers, teams, and stakeholders.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead large cross-functional projects end to end, from concept, design, and implementation through to launch and driving adoption, all while ensuring the highest quality and performance.</li>
<li>Apply a strong product sense and run customer interviews, translating data and user feedback into features that inform the team&#39;s roadmap.</li>
<li>Mentor engineers and leaders, share industry knowledge, and contribute to the technical growth of the team.</li>
<li>Disambiguate complex problems, align stakeholders with different priorities, and prioritize aggressively to execute effectively.</li>
<li>Make system-level improvements and enhancements, and implement complex code modifications.</li>
<li>Collaborate closely with engineering teams and stakeholders to integrate Business Manager capabilities into broader infrastructure and use cases across Reddit.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>7+ years of software engineering experience building production services at scale.</li>
<li>Ads domain experience.</li>
<li>Excellent communication skills to collaborate with a service-oriented team and company.</li>
<li>Experience coordinating large-scale, cross-functional efforts that span different teams, and driving alignment between diverse stakeholders.</li>
<li>Experience solving complex system scaling and latency performance problems.</li>
<li>Strong proficiency in one or more of Go and Python, plus experience with service frameworks (gRPC/Thrift) and API design.</li>
<li>Experience with distributed systems, data modeling, and event-driven architectures (e.g., Kafka/PubSub).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Previous experience as a Tech Lead or in a similar function.</li>
<li>Experience building solutions for advertising agencies or other global enterprise customers.</li>
</ul>
<p>Our Stack:</p>
<ul>
<li>Go, Python; gRPC/Thrift; Kafka; Postgres, BigQuery, Redis, Cassandra, SpiceDB (ReBAC); Kubernetes; AWS/GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,000-$303,900 USD</Salaryrange>
      <Skills>Go, Python, gRPC/Thrift, Kafka, Postgres, BigQuery, Redis, Cassandra, SpiceDB (ReBAC), Kubernetes, AWS/GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7590453</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ece4c581-f94</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including, but not limited to, provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee, per 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid-senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a provider of identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7774364</Applyto>
      <Location>New York, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4b4378c3-f92</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join our Advertising, Company Intelligence, and Intent team. As a key member of our engineering team, you&#39;ll design and implement the core systems that power our real-time marketing platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and building distributed systems that process, enrich, and respond to billions of behavioral events per day in real time</li>
<li>Developing high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform</li>
<li>Leveraging machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making</li>
<li>Building intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights</li>
<li>Designing and operating data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads</li>
<li>Driving quality, performance, scalability, and observability across all systems you own</li>
<li>Collaborating cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling</li>
<li>Contributing to technical leadership and mentorship of teammates</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership. You should have strong experience in at least one of the following areas:</p>
<ul>
<li>Distributed systems engineering</li>
<li>Big data infrastructure</li>
<li>Applied AI/ML</li>
</ul>
<p>You should also be proficient in one or more core languages (Java, Go, Python), have a solid grasp of SQL and large-scale data modeling, and be familiar with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, and Snowflake.</p>
<p>Bonus points if you have:</p>
<ul>
<li>Experience in ad tech, real-time bidding (RTB), or programmatic systems</li>
<li>A background in identity resolution, attribution, or behavioral analytics at scale</li>
<li>Contributions to open source in ML, infrastructure, or data tooling</li>
<li>Strong product instincts and a passion for building tools that drive meaningful outcomes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Distributed systems engineering, Big data infrastructure, Applied AI/ML, Java, Go, Python, SQL, ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8340521002</Applyto>
      <Location>Bethesda, Maryland, United States; Remote US - PST; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f38b4fcf-88f</externalid>
      <Title>Staff Software Engineer, Organization</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Organizations team. As a Staff Software Engineer, you will help drive architectural vision and strategy on the team to design and deliver powerful new enterprise functionality for our SaaS customers. You will identify and implement strategic technical improvements to our codebase and architecture, orchestrate and lead major technical projects, and mentor and coach less experienced engineers on sound engineering practices and technical leadership.</p>
<p>You will work closely with the Product Manager and Product Designer to define the look, feel, and functionality of new features and review customer feedback. You will also serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</p>
<p>To be successful in this role, you will need to have solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems. You will also need to have worked on projects that required close collaboration with external teams and have experience making those a success.</p>
<p>You will be a good mentor and communicator, able to explain complex concepts simply in person or in a document. You will know that while an engineer can write code, teams collaborate to ship successful products.</p>
<p>You will have solid previous experience using Node.js (JavaScript or TypeScript) to build scalable backend services and to create and maintain public and internal APIs. You will also have built frontend and full-stack apps and know which approach to use when.</p>
<p>You will have a good understanding of SQL databases and know how to debug and optimize table and query structure for performance under load. You will also have experience with Docker and cloud environments (AWS and Azure preferred).</p>
<p>Bonus points for:</p>
<ul>
<li>Experience with Kubernetes</li>
<li>Knowledge of authentication protocols such as OAuth2, OIDC, and SAML</li>
<li>Understanding of event-driven architectures, especially Apache Kafka</li>
<li>Understanding of and experience with DevOps culture</li>
<li>Knowledge of security engineering and application security</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>€74.000-€102.000 EUR</Salaryrange>
      <Skills>Node.js, JavaScript, Typescript, SQL databases, Docker, cloud environments, AWS, Azure, Kubernetes, authentication protocols, OAuth2, OIDC, SAML, event-driven architectures, Apache Kafka, DevOps culture, security engineering, application security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7560775</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f3f1713-f74</externalid>
      <Title>Systems Reliability Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code.</p>
<p>As a Systems Reliability Engineer on one of our Production Engineering teams, you&#39;ll be building the tools to help engineers deploy and operate the services that make Cloudflare work. Our mission is to provide a reliable, yet flexible, platform to help product teams release new software efficiently and safely.</p>
<p>Core platforms we operate at Cloudflare include:</p>
<ul>
<li>Kubernetes</li>
<li>Kafka</li>
<li>Developer tools, CI, and CD systems</li>
<li>Vault, Consul</li>
<li>Terraform</li>
<li>Temporal Workflows</li>
<li>Cloudflare Developer Platform</li>
</ul>
<p>Responsibilities</p>
<ul>
<li>Build software that automates the operation of large, highly-available distributed systems.</li>
<li>Ensure platform security, and guide security best practices</li>
<li>Document your work and guide fellow developers towards optimal solutions</li>
<li>Contribute back to the open source community</li>
<li>Leave code better than we found it</li>
</ul>
<p>Requirements</p>
<ul>
<li>Recent career experience with Go or Python, and at least 3 years of experience as a full-time software engineer (any language). Rust is an added bonus.</li>
<li>Experience with deploying and managing services using Docker on Linux</li>
<li>A firm grasp of IP networking, load balancing and DNS</li>
<li>Excellent debugging skills in a distributed systems environment</li>
<li>Source control experience including branching, merging and rebasing (we use git)</li>
<li>The ability to break down complex problems and drive towards a solution</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience with Deployments, StatefulSets, PersistentVolumeClaims, Ingresses, and CRDs on Kubernetes</li>
<li>Operational experience deploying and managing large systems on bare metal</li>
<li>Experience as a Site Reliability Engineer (SRE) for a large-scale company</li>
<li>Practical knowledge of web and systems performance, with extensive use of tracing tools like eBPF and strace</li>
<li>Alerting and monitoring (Prometheus/Alertmanager) and configuration management (Salt)</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Docker, Linux, IP networking, load balancing, DNS, source control, git, Kubernetes, Kafka, Vault, Consul, Terraform, Temporal Workflows, Cloudflare Developer Platform, Rust, Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs, ebpf, strace, Prometheus, Alert Manager, salt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that operates one of the world&apos;s largest networks, powering millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7453074</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f06742a2-a51</externalid>
      <Title>Senior Software Engineer (Data Platform)</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. Our engineering teams build technical products that fulfill real, important needs in the world. We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>
<p>As a Senior Software Engineer working on the Data Platform team, you will help build the Data Intelligence Platform for Databricks that will allow us to automate decision-making across the entire company. You will achieve this in collaboration with Databricks Product Teams, Data Science, Applied AI, and many more. You will develop a variety of tools spanning logging, orchestration, data transformation, metric stores, governance platforms, data consumption layers, etc. You will do this using the latest, bleeding-edge Databricks product and other tools in the data ecosystem; the team also functions as a large, production, in-house customer that dogfoods Databricks and guides the future direction of the product.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and run the Databricks metrics store that enables all business units and engineering teams to bring their detailed metrics into a common platform for sharing and aggregation, with high quality, introspection ability, and query performance.</li>
<li>Design and run the cross-company Data Intelligence Platform, which contains every business and product metric used to run Databricks. You&#39;ll play a key role in developing the right balance of data protections and ease of shareability for the Data Intelligence Platform as we transition to a public company.</li>
<li>Develop tooling and infrastructure to efficiently manage and run Databricks on Databricks at scale, across multiple clouds, geographies, and deployment types. This includes CI/CD processes, test frameworks for pipelines and data quality, and infrastructure-as-code tooling.</li>
<li>Design the base ETL framework used by all pipelines developed at the company.</li>
<li>Partner with our engineering teams to provide leadership in developing the long-term vision and requirements for the Databricks product.</li>
<li>Build reliable data pipelines and solve data problems using Databricks, our partners&#39; products, and other OSS tools. Provide early feedback on the design and operations of these products.</li>
<li>Establish conventions and create new APIs for telemetry, debug, feature, and audit event log data, and evolve them as the product and underlying services change.</li>
<li>Represent Databricks at academic and industrial conferences and events.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL frameworks, metrics stores, infrastructure management, data security, large-scale messaging systems, workflow or orchestration frameworks, Airflow, DBT, Kafka, RabbitMQ</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks develops and operates a data and AI infrastructure platform for businesses.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7647369002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ab2d4d68-d1c</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>We are seeking a talented Software Engineer to join our X Money team, focused on building a revolutionary global payment network that will serve over 600 million users and rival the world&#39;s largest financial institutions.</p>
<p>In this role, you will specialise in backend development, designing and optimising robust microservices to ensure scalability, security, and reliability. You will support full-stack efforts, collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and optimise microservices for high-concurrency transactions using Go, Postgres, and Kafka.</li>
<li>Collaborate on system architecture, testing, and monitoring to ensure uptime and performance.</li>
<li>Build internal tools using frontend technologies as needed to support operational efficiency.</li>
<li>Support the Technical Lead in risk mitigation and align with engineering, product, and compliance teams to drive project success.</li>
<li>Contribute to the development of secure, scalable systems for handling financial data and transactions.</li>
<li>Iterate rapidly on feedback to deliver high-quality solutions in a dynamic environment.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>5+ years of software engineering experience, with a strong focus on backend development.</li>
<li>Proficiency in Go or similar languages and experience with databases (e.g., Postgres) and streaming systems (e.g., Kafka).</li>
<li>Familiarity with building distributed systems for high-scale, low-latency environments.</li>
<li>Knowledge of handling secure financial data.</li>
<li>Ability to contribute to frontend development for internal tools when required.</li>
<li>Strong communication and problem-solving skills, with a collaborative mindset.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in fintech, payments, or regulatory frameworks (e.g., PCI-DSS, AML/KYC).</li>
<li>Prior work in a fast-paced, startup-like environment on greenfield projects.</li>
<li>Comfort navigating ambiguous requirements and iterating based on feedback.</li>
<li>Passion for leveraging AI to transform financial systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Postgres, Kafka, backend development, databases, streaming systems, secure financial data, frontend development, fintech, payments, regulatory frameworks, PCI-DSS, AML/KYC, fast-paced environment, greenfield projects, AI transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5007310007</Applyto>
      <Location>Tokyo, JP</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc23dcd4-30e</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Software Engineer to join our Ads team. As a backend engineer, you&#39;ll work on building scalable microservices and APIs that power our advertiser-facing product, ads.reddit.com. You&#39;ll also collaborate with the platform and data teams to build new features and improve operational stability.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Working with product managers to design and implement Ads products</li>
<li>Collaborating closely with the platform and data teams while building new features</li>
<li>Leading the processes needed to improve operational stability, including improving code quality and delivering dashboards and data visualizations</li>
<li>Building extensible components that align with product objectives</li>
<li>Supporting day-to-day project management tasks, including communicating project updates, managing project timelines, and overseeing project execution</li>
</ul>
<p>To succeed in this role, you&#39;ll need:</p>
<ul>
<li>3+ years of software development experience in one or more general-purpose programming languages (Java, Scala, Go, C++, Python)</li>
<li>Ability to take complete ownership of a feature or project</li>
<li>Experience working in the Ads domain is a plus</li>
<li>Interest in the advertising business and understanding customer needs is a plus</li>
</ul>
<p>We offer a range of benefits, including global benefit programs, family planning support, gender-affirming care, mental health and coaching benefits, comprehensive medical benefits, and more.</p>
<p>If you&#39;re passionate about building scalable and reliable software systems, and want to join a team that&#39;s dedicated to innovation and growth, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Scala, Kafka, Postgres, BigQuery, Redis, Druid, Kubernetes, Argo, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6909093</Applyto>
      <Location>Remote - Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a6f0739-e83</externalid>
      <Title>Senior Staff Machine Learning Engineer, Growth Platform Engineering</Title>
      <Description><![CDATA[<p>The Growth Platform team&#39;s vision is to drive long-term sustainable growth for the Airbnb community. Our mission is to build a best-in-class agentic system and the capabilities to support the growth of all Airbnb products, current and future.</p>
<p>We achieve this by delivering highly personalised and relevant content and product experiences to the Airbnb community, both on and off the Airbnb platform. The north star is full autonomy, where AI identifies opportunities, creates campaigns, personalises experiences, and optimises outcomes with minimal human intervention.</p>
<p>As a machine learning engineer or scientist, your expertise will be pivotal in developing AI-powered solutions that shape the future of the Airbnb agentic growth platform with cutting-edge AI techniques. You will drive and guide the rest of the engineers to brainstorm, design, and develop AI products and features from inception to production.</p>
<p>Some example projects you will work on:</p>
<ul>
<li>AI-Powered Content Generation</li>
<li>ML/AI Orchestration for Decisioning</li>
<li>Proactive Marketing Analyst Agent</li>
</ul>
<p>A typical day will involve working with large-scale structured and unstructured data: exploring, experimenting, building, and continuously improving Machine Learning models and pipelines for Airbnb product, business, and operational use cases.</p>
<p>You will work collaboratively with cross-functional partners, including product managers, operations, and data scientists, to identify opportunities for business impact; understand, refine, and prioritise requirements for machine learning; and drive engineering decisions.</p>
<p>You will develop, productionise, and operate ML/AI models and pipelines at scale, hands-on, including both batch and real-time use cases.</p>
<p>You will leverage third-party and in-house Machine Learning tools and infrastructure to develop reusable, highly differentiating, and high-performing Machine Learning systems that enable fast model development, low-latency serving, and easy upkeep of model quality.</p>
<p>You will collaborate actively with engineers to apply ML/AI in their solutions to help validate ideas and guide them to the right outcomes.</p>
<p>You will partner with ML/AI Engineers in foundations engineering to mentor and develop initiatives that make ML/AI applications a core discipline for non-ML/AI engineers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, AI, Python, Java, C++, TensorFlow, PyTorch, Kubernetes, Airflow, Kafka, Agentic and Automation, Agile Practice for AI Production, Infrastructure Acumen</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7747259</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3299844-c42</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Opportunity</strong></p>
<p>The Migration Services team builds the critical, data-driven services that seamlessly move customers across environments in real-time. We are looking for a Senior Software Engineer who is passionate about crafting elegant solutions to complex distributed systems problems. You will be a key player in driving innovation, collaborating with architects and product managers to build and own the crucial infrastructure that underpins the Auth0 ecosystem. If you are excited by the prospect of making a massive impact, we want to hear from you!</p>
<p><strong>What You&#39;ll Achieve</strong></p>
<ul>
<li>Build for scale. You will develop and operate highly scalable, data-intensive services, demonstrating code craftsmanship and an eye for detail.</li>
<li>Master the data stream. You&#39;ll leverage streaming technologies and implement advanced change data capture (CDC) strategies to ensure the secure, reliable, and efficient transfer of data.</li>
<li>Drive operational excellence. Through continuous monitoring and performance tuning, you will enhance the reliability of our migration processes and participate in our team&#39;s on-call rotation to ensure our services are always on.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Proven engineering background. With 3+ years of experience in fast-paced, agile environments, you have a proven track record of shipping high-quality software.</li>
<li>Database familiarity. You possess a strong understanding of database fundamentals and have hands-on experience with datastores like MongoDB and PostgreSQL.</li>
<li>Go is your go-to. You have strong proficiency in Golang or, optionally, in Node.js.</li>
<li>A passion for reliability. You have interest and experience in reliability engineering, including familiarity with observability and incident management.</li>
<li>Collaborative skills. Your excellent written and verbal communication skills enable you to collaborate effectively with cross-functional and geo-dispersed teams.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure) and container technologies such as Kubernetes and Docker.</li>
</ul>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, MongoDB, PostgreSQL, Distributed systems, Reliability engineering, Observability, Incident management, Kafka, IAM, Cloud providers, Container technologies, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7809897</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3922bc3d-027</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product. This involves, among other things, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>Some example teams you can join include:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for customers to onboard and manage all of their data teams, across tens of thousands of users, on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and cloud providers.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production-level experience in Java, Scala, C++, or a similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544443002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc38e24f-97e</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Machine Learning Engineer to join our Ads Engineering team. As a key member of our team, you will design and build production ML systems that power core experiences across the platform, including personalized recommendations, search, and ranking systems, intelligent advertising systems, and large-scale machine learning pipelines.</p>
<p>Our team is responsible for building systems that operate at internet scale and directly influence user experience, advertiser value, and business outcomes. You will work on high-impact systems that improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems.</p>
<p>As a Senior Machine Learning Engineer, you will:</p>
<ul>
<li>Design, build, and deploy production-grade machine learning models and systems at scale</li>
<li>Own the full ML lifecycle: from problem definition and feature engineering to training, evaluation, deployment, and monitoring</li>
<li>Build scalable data and model pipelines with strong reliability, observability, and automated retraining</li>
<li>Work with large-scale datasets to improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems</li>
<li>Partner cross-functionally with Product, Data Science, Infrastructure, and Engineering teams to translate complex problems into ML solutions</li>
<li>Improve system performance across latency, throughput, and model quality metrics</li>
<li>Research and apply state-of-the-art machine learning and AI techniques, including deep learning, graph- and transformer-based models, and LLM evaluation/alignment</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>3-5+ years of experience building, deploying, and operating machine learning systems in production</li>
<li>Strong programming skills in Python, Java, Go, or similar languages, with solid software engineering fundamentals</li>
<li>ML Fundamentals: a strong grasp of algorithms, from classic statistical learning (XGBoost, Random Forests, regressions) to DL architectures (Transformers, CNNs, GNNs)</li>
<li>Hands-on experience with modern ML frameworks (e.g., PyTorch, TensorFlow)</li>
<li>Experience designing scalable ML pipelines, data processing systems, and model serving infrastructure</li>
<li>Ability to work cross-functionally and translate ambiguous product or business problems into technical solutions</li>
<li>Experience improving measurable metrics through applied machine learning</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with recommender systems, search/ranking systems, advertising/auction systems, large-scale representation learning, or multimodal embedding systems</li>
<li>Familiarity with distributed systems and large-scale data processing (Spark, Kafka, Ray, Airflow, BigQuery, Redis, etc.)</li>
<li>Experience working with real-time systems and low-latency production environments</li>
<li>Background in feature engineering, model optimization, and production monitoring</li>
<li>Experience with LLM/Gen AI techniques, including but not limited to LLM evaluation, alignment, fine-tuning, knowledge distillation, RAG/agentic systems and productionizing LLM-powered products at scale</li>
<li>Advanced degree in Computer Science, Machine Learning, or related quantitative field</li>
</ul>
<p>Potential Teams:</p>
<ul>
<li>Ads Measurement Modeling</li>
<li>Ads Targeting and Retrieval</li>
<li>Advertiser Optimization</li>
<li>Ads Marketplace Quality</li>
<li>Ads Creative Effectiveness</li>
<li>Ads Foundational Representations</li>
<li>Ads Content Understanding</li>
<li>Ads Ranking</li>
<li>Feed Relevance</li>
<li>Search and Answers Relevance</li>
<li>ML Understanding</li>
<li>Notifications Relevance</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
<p>Pay Transparency:</p>
<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units and, depending on the position offered, may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, a 401(k) program with employer match, generous time off for vacation, and parental leave. To learn more, please visit https://www.redditinc.com/careers/.</p>
<p>To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors, including skills, depth of work experience, and relevant licenses/credentials, and may vary from the amounts listed below.</p>
<p>The base salary range for this position is $216,700-$303,400 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$216,700-$303,400 USD</Salaryrange>
      <Skills>Python, Java, Go, PyTorch, TensorFlow, XGBoost, Random Forests, Regressions, Transformers, CNNs, GNNs, Spark, Kafka, Ray, Airflow, BigQuery, Redis, Recommender systems, Search/ranking systems, Advertising/auction systems, Large-scale representation learning, Multimodal embedding systems, Distributed systems, Large-scale data processing, Real-time systems, Low-latency production environments, Feature engineering, Model optimization, Production monitoring, LLM/Gen AI techniques, LLM evaluation, Alignment, Fine-tuning, Knowledge distillation, RAG/agentic systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors, operating a vast network of communities centered around shared interests.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6960831</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe0d53c0-05e</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>
<p>As a DSA, you will help ensure customer success by driving focus and technical accountability with our most complex customers, who need guidance to accelerate consumption of the Databricks workloads they have already selected. This is a hybrid technical and commercial role. It is commercial in the sense that you will own and drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, owning executive relationships, and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>
<p>In parallel, it is technical: you are expected to become at least Level 200 across all Databricks products/workloads and to serve as the Use Case-specific technical lead after the Technical Win. You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels of an organization, with a track record of building strong relationships with customers&#39; executives and C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>
<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>
<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>
<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>
<li>Be the first point of contact for any technical issues or questions related to the production/go-live status of agreed-upon Use Cases within an account, often servicing multiple use cases within the largest and most complex organizations.</li>
<li>Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) to execute on the right tasks that are beyond your scope of activities or expertise.</li>
<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>
<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>
<li>Build and maintain both an executive-level and a detailed programme-level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, and SA across the workstreams below:</li>
</ul>
<ul>
<li>Key use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of LH vision)</li>
<li>Organic needs for current investment, e.g. cloud cost control, tuning &amp; optimization</li>
<li>Executive and operational governance</li>
<li>Proactively provide internal and external updates</li>
<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>
<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. The company was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482406002</Applyto>
      <Location>Seoul, South Korea</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>17d99112-d46</externalid>
      <Title>Software Engineer, Product Catalogs</Title>
      <Description><![CDATA[<p>We are looking for a skilled backend software engineer to join the Product Catalogs team at Reddit. Our team builds products and infrastructure that enable retail advertisers to succeed on Reddit.</p>
<p>As a software engineer on this team, you will have the opportunity to work on projects such as catalog system scaling, catalog management, and product enhancement. You will develop, maintain, and scale our product catalogs backend, contribute to the development of features to make our product easier to use, and produce robust and sustainable code.</p>
<p>To be successful in this role, you will need a bachelor&#39;s degree or equivalent experience in a quantitative or computer science-related field, 4+ years of full-time backend software engineering experience in a scalable computing environment, and strong communication and collaboration skills.</p>
<p>We offer a dynamic work environment, opportunities for professional growth and development, a competitive salary and benefits package, and flexible work arrangements.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Go, gRPC, Thrift, Baseplate, Kafka, Postgres, BigQuery, Redis, TiDB, Kubernetes, Airflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7761320</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80b94e35-0f3</externalid>
      <Title>Staff Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Frontline Staff Technical Solutions Engineer with over 12+ years of experience to join our Platform Support team. This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: Minimum of 12 years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7845334002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64989723-d54</externalid>
      <Title>Staff Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Streaming Foundations team. As a Staff Software Engineer, you will help set the technical direction for the team and influence the engineering roadmap for the Platform&#39;s streaming capabilities. You will design and lead the implementation of our most complex and critical systems for data-intensive use cases. You will research and champion new technologies and architectural patterns to solve strategic challenges and scale the platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Helping set the technical direction for the team and influencing the engineering roadmap for the Platform&#39;s streaming capabilities</li>
<li>Designing and leading the implementation of our most complex and critical systems for data-intensive use cases</li>
<li>Researching and championing new technologies and architectural patterns to solve strategic challenges and scale the platform</li>
<li>Leading and influencing cross-functional initiatives, ensuring technical alignment and successful execution across multiple teams</li>
<li>Improving the operational posture of our systems by designing for observability, reliability, and scalability, and by mentoring others in operational best practices</li>
<li>Coaching and mentoring senior engineers and acting as a technical leader across the engineering organization</li>
</ul>
<p>You will bring to our teams:</p>
<ul>
<li>5+ years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion for and interest in working on systems that are highly reliable, maintainable, scalable and secure</li>
</ul>
<p>Extra points:</p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Golang, Java, database fundamentals, event streaming technologies, Kafka, scalable systems, secure systems, TypeScript, React, cloud providers, container technologies, Kubernetes, Docker, Identity and Access Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630523</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f64d6ed-6a9</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our team. As a Senior Software Engineer, you will build, evolve, and operate backend services at scale for ZoomInfo. You&#39;ll work primarily with Node.js/TypeScript (NestJS preferred), design robust REST/GraphQL APIs, optimize MongoDB/Redis, and deploy on cloud (GCP preferred or AWS) with a strong focus on reliability, performance, security, and cost efficiency.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement, and own microservices and REST/GraphQL APIs in Node.js/TypeScript (NestJS preferred)</li>
<li>Translate product requirements into technical designs; break down work, estimate, and deliver incrementally</li>
<li>Model data and optimize queries in MongoDB; implement effective caching with Redis (TTL, eviction, hot-key mitigation)</li>
<li>Ship production-ready code with unit/integration tests; participate in on-call, incident response, and postmortems</li>
<li>Containerize and deploy via Docker/Kubernetes; automate builds and releases with CI/CD (blue/green or canary)</li>
<li>Instrument services for logs, metrics, and traces (p95/p99); continuously improve latency, reliability, and cost</li>
<li>Review code, document designs, and mentor SE II/III engineers; contribute to shared standards and best practices</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of software engineering experience, including 3+ years building backend services in Node.js/TypeScript</li>
<li>Strong API fundamentals: versioning, pagination, authN/Z (OAuth/OIDC), and secure coding (OWASP)</li>
<li>Hands-on with NestJS/Express/Fastify; familiarity with microservices patterns and event-driven workflows</li>
<li>MongoDB expertise (schema design, indexing, basic sharding concepts) and Redis caching patterns</li>
<li>Cloud experience on GCP (preferred) or AWS; Docker; working knowledge of Kubernetes; CI/CD with GitHub Actions/Jenkins/GitLab</li>
<li>Observability skills: Datadog/OpenTelemetry/Prometheus/Grafana; confident debugging in production</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Kafka or Pub/Sub; API Gateway/Ingress; feature flags; rate limiting and quotas</li>
<li>Terraform/Helm; security tooling (SonarQube), dependency hygiene, secret management</li>
<li>Performance profiling, load testing, and practical cost optimization</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, TypeScript, NestJS, MongoDB, Redis, Docker, Kubernetes, CI/CD, API fundamentals, Microservices, Event-driven workflows, Observability, Kafka, Pub/Sub, API Gateway, Ingress, Feature flags, Rate limiting, Quotas, Terraform, Helm, Security tooling, Dependency hygiene, Secret management, Performance profiling, Load testing, Cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a publicly traded company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8305634002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>087e2e06-4fb</externalid>
      <Title>Staff Machine Learning Engineer, Ads Auction (Ads Marketplace Quality)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Machine Learning Engineer to join our Ads Marketplace Quality team. As a key member of the team, you will be responsible for developing and executing a vision to improve our Ads Marketplace at Reddit. You will develop a deep understanding of our marketplace dynamics and identify areas of improvement by getting to the bottom of data, design, implement and ship algorithms to production that improve our ads marketplace efficiency.</p>
<p>In this role, you will specialize in improving and optimizing our ads auction and pricing mechanism which will have a direct impact on upleveling the utility for both our advertiser and user values. You will also have the opportunity to work on other org-wide strategic initiatives such as supply optimization and ad relevance, where you will drive and execute on Reddit’s vision to transform Reddit into an advertising platform that shows the right ads to the right users at the right time in the right context.</p>
<p>As a Staff Machine Learning Engineer in the Ads Marketplace Quality team, you will be an industry technical leader with domain knowledge in ads marketplace dynamics, auction and pricing, you will research, formulate, and execute on our mission to build end-to-end algorithmic solutions and deliver values to all the three-sided participants to our marketplace.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and oversee the strategy development, quarterly planning and day-to-day execution of initiatives related to ads marketplace, auction and pricing.</li>
<li>Proactively further our understanding of marketplace dynamics and develop algorithms to improve the efficiency and effectiveness of our ads marketplace, auction and pricing.</li>
<li>Oversee end-to-end ML workflows, from data ingestion and feature engineering to model training, evaluation, and deployment, that optimize ads marketplace efficiency.</li>
<li>Be a mentor: lead both junior and senior engineers through technical designs and reviews, fostering a culture of innovation, technical excellence, and knowledge sharing across the organization.</li>
<li>Be a cross-functional advocate for the team, collaborating with cross-functional partners (e.g., product management, data science, PMM, and Sales) to innovate and build products.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>8+ years of experience with industry-level product development, with at least 5 years focused on data-driven marketplace-optimization problems at scale.</li>
<li>Strong knowledge of ads marketplace optimization. Demonstrated experience architecting ads marketplace design, improving and optimizing ads auction and pricing mechanisms.</li>
<li>Solid understanding of large-scale data processing, distributed computing, and data infrastructure (e.g., Spark, Kafka, Beam, Flink).</li>
<li>Proficiency in machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries for feature engineering, model training, and inference.</li>
<li>Proficiency with programming languages (Java, Python, Golang, C++, or similar) and statistical analysis.</li>
<li>Proven technical leadership in cross-functional settings, driving architectural decisions and influencing stakeholders (product, data science, privacy, legal).</li>
<li>Excellent communication, mentoring, and collaboration skills to align teams on a long-term vision for ads marketplace optimization.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits</li>
<li>401k Matching</li>
<li>Workspace benefits for your home office</li>
<li>Personal &amp; Professional development funds</li>
<li>Family Planning Support</li>
<li>Flexible Vacation (please use them!) &amp; Reddit Global Wellness Days</li>
<li>4+ months paid Parental Leave</li>
<li>Paid Volunteer time off</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>machine learning, ads marketplace optimization, large-scale data processing, distributed computing, data infrastructure, Spark, Kafka, Beam, Flink, TensorFlow, PyTorch, feature engineering, model training, inference, programming languages, statistical analysis, technical leadership, cross-functional settings, architectural decisions, influencing stakeholders</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7181821</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6d7f1a0-882</externalid>
      <Title>Resident Solutions Architect - Mumbai</Title>
      <Description><![CDATA[<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>
<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>
<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>
<li>Providing expert-level technical guidance and support to customers during the implementation process</li>
<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>
<li>4+ years of experience as a Solution Architect creating designs and solving Big Data challenges for customers</li>
<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>
<li>Comfortable writing code in Python, PySpark, and Scala</li>
<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>
<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>
<li>Expertise in Azure</li>
<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>
<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>
<li>Ability to work with Partner Organization and deliver complex programs</li>
<li>Ability to lead large technical delivery teams</li>
<li>Understanding of the larger competitive landscape, such as EMR, Snowflake, and SageMaker</li>
<li>Experience migrating from on-prem/cloud to Databricks is a plus</li>
<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>
<li>Willingness to travel for onsite customer engagements within India</li>
<li>Documentation and white-boarding skills</li>
</ul>
<p>Good-to-have Skills:</p>
<ul>
<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>
<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>
<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>
<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>
<li>Expertise on cloud platforms like AWS and GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8107166002</Applyto>
      <Location>Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ef77a56-d6f</externalid>
      <Title>Staff Software Engineer - Tax Engineering</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a Staff Software Engineer to technically lead the Tax Engineering team within the Consumer Product Group.</p>
<p>Tax Engineering sits at the intersection of every trade, every payment, and every product Coinbase ships on the hot path.</p>
<p>As the Staff Software Engineer on the team you&#39;ll define multi-quarter technical strategies, build systems with stringent correctness and scalability requirements, and set the technical direction for how Coinbase handles one of the most complex domains in financial services.</p>
<p>Ownership &amp; impact</p>
<p>In this role, you will:</p>
<ul>
<li>Own the architecture and evolution of real-time and offline systems that calculate, track, and report taxes for crypto transactions at scale, ensuring correctness, low latency, and 24x7 availability.</li>
<li>Define multi-quarter technical strategies for the Tax Platform, identifying opportunities to simplify complexity, improve reliability, and expand capabilities as Coinbase launches new asset types and products.</li>
<li>Architect and build distributed systems that power tax calculation engines, cost basis tracking, and tax reporting APIs, serving millions of customers with strict accuracy requirements.</li>
<li>Lead technical design and code reviews, setting standards for quality, performance, and maintainability across the team.</li>
<li>Mentor engineers and elevate the technical bar.</li>
<li>Partner cross-functionally with product, data, compliance, and frontend teams to deliver tax features that meet regulatory requirements and delight customers, from annual tax reports to real-time gain/loss calculations.</li>
<li>Drive operational excellence by owning system reliability, incident response, and performance optimization for critical tax infrastructure that operates at the scale and speed of crypto markets.</li>
</ul>
<p>Minimum qualifications</p>
<ul>
<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>
<li>Proven track record designing, building, scaling, and maintaining production-level distributed systems with stringent correctness and availability requirements.</li>
<li>Strong experience with backend languages (e.g., Go, Python, or similar) and modern infrastructure patterns including microservices, event-driven architectures, and REST/GraphQL API design.</li>
<li>Deep expertise in data-intensive systems, including experience with Kafka, ClickHouse, or similar tools for real-time and batch processing at scale.</li>
<li>Demonstrated technical leadership: leading large projects with long-term impact, mentoring engineers, and driving alignment across teams on technical strategy.</li>
<li>Excellent judgment on prioritization and the ability to break down ambiguous problems into actionable technical plans.</li>
<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with tax systems, cost basis engines, 1099 reporting, or financial compliance infrastructure.</li>
<li>Familiarity with equities, options, or margin trading, or a strong interest in learning trading/brokerage domains.</li>
<li>Background at a tech-focused company (fintech, crypto, high-growth startup) rather than traditional finance.</li>
</ul>
<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$217,900-$217,900 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,900-$217,900 CAD</Salaryrange>
      <Skills>software engineering, backend languages, microservices, event-driven architectures, REST/GraphQL API design, data-intensive systems, Kafka, Clickhouse, generative AI tools, copilots</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7773216</Applyto>
      <Location>Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>50f401de-7b1</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Who we are</p>
<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>As we continue to revolutionize how the world interacts, we&#39;re acquiring new skills and experiences that make work feel truly rewarding.</p>
<p>Your career at Twilio is in your hands.</p>
<p>We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>
<p>Join the team as Twilio&#39;s next Staff Software Engineer</p>
<p>About the job</p>
<p>This position is needed to harden, optimize, and scale the real-time event-aggregation services that power our Observability Insights/Analytics platform.</p>
<p>We are seeking a Staff Software Engineer with deep Java expertise to own high-throughput stream-processing microservices (Kafka Streams / Flink) deployed on AWS EKS, tune ClickHouse for millisecond-latency writes, and embed observability that keeps incident minutes near zero.</p>
<p>You will design resilient, high-performance systems capable of processing &gt;250K events/sec with p99 latencies under 200ms, while championing DevSecOps practices and mentoring junior engineers.</p>
<p>Responsibilities</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Design, build, and maintain high-performance Java microservices using Spring Boot, capable of ingesting &gt;250K events/sec with p99 latencies under 200ms</li>
</ul>
<p>Qualifications</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
<p>We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>8+ years of professional Java development experience with mastery of high-performance and low-latency design patterns</li>
<li>Production experience with Kafka Streams, Flink, or comparable stream-processing frameworks for building real-time data pipelines</li>
<li>Hands-on ClickHouse (or columnar database) performance tuning and SQL optimization expertise</li>
<li>Proven success operating AWS-hosted microservices at scale with solid Linux, Docker, and Kubernetes knowledge</li>
<li>Strong observability mindset including metrics, tracing, alerting, and post-incident analysis capabilities</li>
<li>Excellent communication skills and a bias toward collaborative problem-solving in cross-functional team environments</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience migrating single-region services to multi-region active-active topologies for high availability</li>
<li>Familiarity with data-privacy controls including PII tokenization and field-level encryption</li>
<li>Previous work in telecom, real-time analytics, or compliance-sensitive domains</li>
<li>Contributions to open-source Java or streaming projects demonstrating community engagement</li>
</ul>
<p>What We Offer</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more.</p>
<p>Offerings vary by location.</p>
<p>Twilio thinks big. Do you?</p>
<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things.</p>
<p>That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic.</p>
<p>Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>
<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>
<p>Twilio is proud to be an equal opportunity employer.</p>
<p>We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.</p>
<p>Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.</p>
<p>Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kafka Streams, Flink, ClickHouse, AWS EKS, Spring Boot, Linux, Docker, Kubernetes, DevSecOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7234666</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>119c9488-4eb</externalid>
      <Title>Software Engineer, Infrastructure (8+ YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>We currently have openings on:</p>
<ul>
<li>Base Infrastructure: We are looking for strong engineers with leadership experience to join the Serving Infrastructure organisation. You will primarily work on the Base Infrastructure team, whose key projects include building replication to support zero-downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Data Infrastructure: The Data Infrastructure team’s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse.</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Proactively identify and lead significant improvements to Airtable’s infrastructure, working across teams and product areas to maximise business and engineering impact.</li>
<li>Work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter.</li>
<li>Build clean, reusable, and maintainable abstractions that will be used by Airtable’s engineers for years to come.</li>
<li>Take full ownership of components of Airtable’s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</li>
</ul>
<p><strong>Who you are</strong></p>
<ul>
<li>You have at least 8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment.</li>
<li>You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure.</li>
<li>You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand.</li>
<li>You have a strong background in computer science with a degree in CS or a related field.</li>
<li>You are currently based in, or willing to relocate to, the San Francisco Bay Area or New York City for this role.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000-$339,900 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Apache Spark, Kafka, Apache Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400388002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9e76f9cf-4c8</externalid>
      <Title>Senior Software Engineer - Billing</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Department</p>
<p>Cloudflare’s Billing Engineering Team is at the heart of every product launch, campaign, and initiative that Cloudflare undertakes. We build and maintain critical systems for billing, payments, usage metering, aggregation, invoicing, and revenue recognition, powering billions in revenue and serving millions of customers.</p>
<p>Currently we&#39;re rebuilding our entire billing platform, designing a metering and aggregation layer that scales effortlessly while ensuring financial accuracy. This is high-impact, high-stakes work that touches all of Cloudflare’s cutting-edge products, like AI, Zero Trust, Edge Compute, Bot Management, DDoS Protection, etc.</p>
<p>As a Senior Systems Engineer, you’ll lead a team of talented, collaborative engineers working across Cloudflare’s ecosystem. You’ll navigate multiple high-profile projects, foster a culture of proactive communication and continuous learning, and drive technical excellence.</p>
<p>If you thrive on solving hard challenges at the intersection of financial infrastructure and distributed systems, this is your opportunity to make a massive impact while growing with us.</p>
<p>What You’ll Do</p>
<p>We are looking for an energetic, team-focused engineer with a growth mindset, able to drive their work from inception through requirements definition, technical specification, development, testing, and go-live. You will work on a range of transactional microservices written in Go. You will help maintain our operational excellence by triaging and solving inbound tickets related to issues across the services Billing maintains.</p>
<p>As you grow within the team you will be given opportunities to own bigger initiatives and lead projects from start to finish solo or as part of a smaller team.</p>
<p>Our Tech Stack</p>
<p>Modern container-based microservice architecture. Technologies we use include Docker, Go (golang), PostgreSQL, Redis, Kafka, Kubernetes, Temporal and the usual Unix/Linux tools and workflows.</p>
<p>We strive to build reliable, fault-tolerant systems that can operate at Cloudflare’s scale.</p>
<p>Desirable Skills and Knowledge</p>
<ul>
<li>BS+ in Computer Science or equivalent experience</li>
<li>7+ years professional experience as a developer/engineer</li>
<li>Knowledge of Golang or desire to learn it</li>
<li>Solid understanding of RESTful APIs and service security</li>
<li>Working knowledge of SQL and relational databases such as PostgreSQL or MySQL</li>
<li>Experience with modern Unix/Linux development and runtime environments</li>
<li>Experience implementing secure and highly-available distributed systems/microservices</li>
<li>Familiarity with event-driven architecture</li>
<li>Experience with API tooling and standards (Swagger/OpenAPI, OAuth/JWT)</li>
<li>Strong interpersonal and communication skills with a bias towards action</li>
</ul>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Docker, PostgreSQL, Redis, Kafka, Kubernetes, Temporal, Unix/Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7282689</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>99aa7ac0-2c6</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
<Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As the Senior Manager of Data Streaming Services, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
<p>To learn more about our Total Rewards program please visit: https://rewards.okta.com/us</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$266,000 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$194,000-$266,000 CAD</Salaryrange>
      <Skills>engineering leadership, team management, technical architecture, distributed systems, project management, agile development, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, IAM, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides a platform for authentication and authorization services.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7735781</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>93c1356c-a95</externalid>
      <Title>Principal Software Engineer, Web Data - Tech Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for an exceptional Principal Software Engineer to serve as the de facto Technical Lead for our Web Data Acquisition (WDA) team. This is a highly visible, hands-on technical leadership role where you&#39;ll own the architectural direction for crawling systems, evolve and unify crawling platforms into a best-in-class stack, and elevate a high-performing engineering team.</p>
<p>As a Principal Software Engineer, you&#39;ll solve complex distributed systems challenges, build modular tooling that accelerates delivery, and set the standard for observability and operational excellence. You&#39;ll have a dedicated manager handling all HR and administrative responsibilities. A product manager connects business needs with technical work. Your focus is 100% technical leadership, mentorship, and hands-on execution.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Technical Leadership &amp; System Design: Proven experience building web crawling or large-scale data systems from scratch. Strong architectural skills designing scalable, fault-tolerant distributed systems. Track record leading complex technical initiatives and driving architecture direction for teams.</li>
<li>Data Engineering Expertise: Deep background in large-scale data engineering (terabytes daily). Hands-on experience with cloud data warehouses (BigQuery, Snowflake). Experience with Apache Kafka, Kubernetes (GKE/EKS), and orchestration tools (Airflow).</li>
<li>Web Crawling &amp; Data Extraction: Deep expertise in web crawling technologies and advanced scraping (Scrapy or similar). Experience extracting structured/unstructured web data and SERP extraction. Knowledge of proxy infrastructure management, anti-bot detection, and ethical crawling.</li>
<li>Leadership &amp; Team Development: Experience mentoring engineers at all levels and fostering collaborative culture. Strong ability to influence technical direction and establish best practices. Track record hiring, coaching, and developing senior engineers.</li>
</ul>
<p>Ideal Candidate Profile:</p>
<ul>
<li>10+ years software engineering experience. 5+ years focused on data engineering. 3+ years in senior/principal-level technical leadership.</li>
<li>Strong CS fundamentals (algorithms, data structures, distributed systems). Self-starter who thrives in fast-paced environments.</li>
</ul>
<p>Core Technical Stack:</p>
<ul>
<li>Python &amp; Java</li>
<li>Apache Kafka</li>
<li>GCP (BigQuery, GKE, Vertex AI)</li>
<li>Snowflake &amp; Starburst/Trino</li>
<li>Terraform</li>
<li>Scrapy / Web Scraping Frameworks</li>
<li>Proxy Management Systems</li>
<li>Distributed Systems &amp; Kubernetes</li>
<li>Apache Airflow</li>
<li>Large-Scale ETL Pipelines</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Python, Java, Apache Kafka, Kubernetes, GCP, Snowflake, Terraform, Scrapy, Proxy Management Systems, Distributed Systems, Apache Airflow, Large-Scale ETL Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8378092002</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ef1d7d5-e0a</externalid>
      <Title>Member of Technical Staff - Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled engineer to join our small, high-impact Observability team. As a Member of Technical Staff, you&#39;ll design and implement scalable observability infrastructure for metrics, logging, and tracing. You&#39;ll build high-performance telemetry pipelines, develop APIs and query engines, and define best practices for instrumentation and alerting. Your work will enable engineering teams to operate services at scale, identify issues before they impact users, and drive systemic reliability improvements.</p>
<p>Our team operates with a flat organisational structure, and leadership is given to those who show initiative and consistently deliver excellence. We value strong communication skills, and all employees are expected to contribute directly to the company&#39;s mission.</p>
<p>You&#39;ll be working with a range of technologies, including Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, and ClickHouse. Experience with Kafka, Redis, and large-scale time series databases is also essential.</p>
<p>In this role, you&#39;ll own the reliability, scalability, and performance of the observability stack end-to-end. You&#39;ll partner with infrastructure and product teams to deeply integrate observability into our internal platforms.</p>
<p>We offer a competitive salary of $180,000 - $440,000 USD, plus equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, ClickHouse, Kafka, Redis, large-scale time series databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803905007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a0c7e0d-7b0</externalid>
      <Title>Senior Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p>The Streaming Foundations team builds services and operates data pipeline infrastructure to support event streaming, messaging, and analytics use cases.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Write maintainable, efficient code using proven patterns to solve complex problems</li>
<li>Lead the design and development of highly scalable services for data-intensive use cases</li>
<li>Evaluate and advocate for modern technologies to accelerate value delivery and improve engineering efficiency</li>
<li>Carry cross-team initiatives from end to end: code reviews, design reviews, operational robustness, security hygiene, etc.</li>
<li>Participate in the team’s on-call rotation to build operational excellence on the services we support</li>
<li>Coach and mentor engineers to help scale up the engineering organisation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3-5 years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion for and interest in working on systems that are highly reliable, maintainable, scalable and secure</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
<p>Annual base salary range for this position for candidates located in Canada is between $136,000-$187,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Golang, Java, Event-driven systems, Database fundamentals, Kafka, TypeScript, React, Cloud providers, Container technologies, Identity and Access Management (IAM)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630525</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>068d5a1f-5ca</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>Join the team as Twilio&#39;s next Software Engineer.</p>
<p>This position will add to our Voice Connectivity Trust team, enabling Twilio to better support customers using Voice in their solutions.</p>
<p>As a Software Engineer on this team, you will participate in all phases of the software development life cycle, including requirements gathering with Product Managers, technical design, estimations, sprint planning, coding, testing, deployments, and on-call support.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Design and implement real-time services with high throughput and low latency requirements; verify, deploy, and operationalize them</li>
<li>Work closely with stakeholders to understand customer needs and devise and deliver simple, robust, and scalable solutions</li>
<li>Be comfortable expressing thoughts and ideas as detailed prose, using it as an effective means of collaborating with leads, architects, and cross-functional teams</li>
<li>Embrace the challenge of scaling a complex distributed platform with points of presence globally, each one concerned with high availability, high reliability, high throughput, low latency, and media fidelity</li>
<li>Figure out novel ways of solving customer problems for the Voice channel</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, RESTful services, API design, event-driven architectures, Kafka, SQS, CI/CD pipelines, cloud infrastructures, AWS, GCP, OpenStack, Azure, excellent written communication skills, strong Java fundamentals, architect, review, debug code, proven ability to critically evaluate AI-generated code, demonstrated proficiency working with AI coding assistants, on-call rotations, incident response, monitoring/alerting tools, Prometheus, Datadog, Grafana, experience scaling data tiers, SQL/NoSQL database and caching technologies, horizontally-scalable, resilient, performing-under-load systems, SIP protocol, Stir/Shaken protocol</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7747550</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9238107d-204</externalid>
      <Title>Software Architect, Reliability Engineering</Title>
      <Description><![CDATA[<p>Join the team as Twilio&#39;s next Reliability Architect.</p>
<p>As an Architect in SRE, you will drive the technical strategy, vision and outcomes for Twilio&#39;s Reliability Engineering organisation. You will define and lead solutions and initiatives that ensure Twilio products are reliable worldwide, and you will define standards and guide engineering teams on best practices for designing, building, and operating resilient systems.</p>
<p>This role is pivotal to Twilio&#39;s commitment to operational excellence, scalability, and pragmatic, large-scale systems design in the cloud.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with senior technical leaders across Twilio to set and communicate the reliability strategy, translating business goals into measurable outcomes.</li>
<li>Influence company-wide architectural decisions while balancing long-term vision with near-term delivery and compliance needs.</li>
<li>Lead the design, implementation, and operation of scalable solutions and paved roads that enable reliable, high-traffic services.</li>
<li>Steer architecture toward availability, performance, resilience, and cost efficiency using Kubernetes, AWS, Terraform, and modern observability.</li>
<li>Ensure integrity and quality across the service lifecycle; design fault-tolerant architectures, incident response, disaster recovery, and capacity/cost management.</li>
<li>Collaborate with product and cross-functional teams to identify reliability risks and convert them into actionable designs, programs, and tooling.</li>
<li>Establish and champion reliability practices and drive systemic improvements.</li>
<li>Mentor and grow engineers and technical leaders.</li>
<li>Track and apply emerging SRE, cloud, and large-scale systems best practices; introduce pragmatic innovations that improve reliability at scale.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>15+ years of experience in Reliability Engineering, Software Engineering, or DevOps roles, with a focus on infrastructure, backend systems, and reliability, including experience as a principal/architect.</li>
<li>Strong experience in driving strategic technical decisions and defining long-term technical vision.</li>
<li>In-depth understanding of the role of Reliability Engineering in a large and diverse SaaS organisation.</li>
<li>Experience driving cross-org technical architecture outcomes.</li>
<li>Knowledge of cloud architecture, devops practices, and large-scale systems design with microservices.</li>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field (or equivalent experience).</li>
<li>Strong production experience, including operational management, scaling, partitioning strategies, and tuning for performance and reliability in high-scale environments.</li>
<li>Hands-on experience with Kubernetes (e.g., EKS), deploying and managing stateful services, and cloud services like AWS.</li>
<li>Proficiency in infrastructure-as-code tools such as Terraform or CloudFormation for automating infrastructure.</li>
<li>Expertise in observability tools (e.g., Prometheus, Grafana, Datadog) for monitoring distributed systems and setting up alerting.</li>
<li>Proficient in at least one programming language (e.g., Go, Python, Java) for building automation and tooling.</li>
<li>Experience designing incident response processes, SLOs/SLIs, runbooks, and participating in on-call rotations.</li>
<li>Experience running cross-functional post-incident reviews and driving improvements.</li>
<li>Strong understanding of distributed systems principles, including consensus, durability, throughput, and availability tradeoffs.</li>
<li>Proven track record of leading reliability improvements in data-intensive or mission-critical systems and collaborating with engineering teams.</li>
<li>Excellent problem-solving, analytical, verbal, and written communication skills, with the ability to work in cross-functional and distributed environments.</li>
<li>Demonstrated leadership in mentoring teams, influencing decisions, and balancing long-term objectives with short-term needs.</li>
<li>Ability to influence and build effective working relationships with all levels of the organisation.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Specific experience owning and operating large AWS footprints.</li>
<li>Knowledge of Kubernetes architecture and concepts.</li>
<li>Experience with data technologies like Apache Kafka, AWS MSK, or similar for reliable streaming.</li>
<li>Passion for building reliable products, with prior projects in high-availability systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$227,840.00 - $284,800.00 per year</Salaryrange>
      <Skills>Reliability Engineering, Software Engineering, DevOps, Cloud Architecture, Microservices, Kubernetes, AWS, Terraform, Observability Tools, Programming Languages, Incident Response, Distributed Systems Principles, Apache Kafka, AWS MSK, Kubernetes Architecture, Data Technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a communications platform that provides cloud communication APIs for building, scaling, and operating real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7658259</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>00b6fe58-4df</externalid>
      <Title>Senior Software Engineer, Enterprise Readiness</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Senior Software Engineer on the team, you will build the foundational services that enable the world’s largest organisations to run on Cloudflare. You will be responsible for the APIs, UIs, internal tooling, and admin platforms that help manage complex enterprise logic at scale.</p>
<p>More specifically, there will be a heavy focus on scaling and extending Organisations - the new abstraction for our largest customers and partners to manage Cloudflare. While this is a full-stack role, our roadmap for the coming year is weighted toward backend architecture and systems design.</p>
<p>You will spend your time helping design our data models, architecting high-performance services in Go, optimising our PostgreSQL layer, and ensuring our services are resilient within our Kubernetes ecosystem.</p>
<p>You won&#39;t just ship features; you will also own the &quot;operational excellence&quot; of your services. You’ll use tools like Jaeger, Sentry, and Kibana to troubleshoot complex distributed traces and ensure our platform remains highly available for our external and internal customers.</p>
<p>You will also rapidly expand your domain knowledge and ability to deliver change through AI tooling. Cloudflare is ramping up its support and infrastructure for AI development tools like OpenCode, which, connected to everything safely possible with MCPs, is enabling engineers to have greater impact, faster than ever.</p>
<p><strong>Core Technologies</strong></p>
<ul>
<li>Backend: Go, PostgreSQL, Redis, PHP</li>
<li>Infrastructure: Kubernetes, Docker, Kafka</li>
<li>Frontend: React, TypeScript</li>
<li>Observability: Kibana, Elasticsearch, Jaeger, Sentry</li>
</ul>
<p><strong>Examples of desirable skills, knowledge, and experience</strong></p>
<ul>
<li>Senior-Level Backend Expertise: 5+ years of experience building and scaling production-grade applications.</li>
<li>Systems Architecture: Proven experience designing distributed systems that are scalable, maintainable, and fault-tolerant.</li>
<li>Pragmatic Full Stack Ability: While your work will be weighted toward the backend, you are comfortable navigating a React/TypeScript codebase to build or improve UI components.</li>
<li>Agentic AI Development: You are excited about exploring and adopting the rapidly advancing AI tooling in your workflows.</li>
<li>Databases: Experience with SQL, including schema design, query optimisation, and serving globally distributed actors.</li>
<li>Observability-First Mindset: You don&#39;t consider a feature &quot;done&quot; until it&#39;s monitored. Experience using distributed tracing (Jaeger), error tracking (Sentry), and log analysis (Kibana/Elasticsearch) to debug production issues.</li>
<li>Cloud &amp; Containers: Practical experience deploying and managing services in Kubernetes and Docker.</li>
<li>Operational Ownership: You are comfortable participating in an on-call rotation and feel a sense of pride in maintaining high-uptime services.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation may be adjusted depending on work location. For Denver-based hires: estimated annual salary of $168,000-$231,000.</p>
<p><strong>Equity</strong></p>
<p>This role is eligible to participate in Cloudflare’s equity plan.</p>
<p><strong>Benefits</strong></p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future, and make life a little easier and fun! The below is a description of our benefits for employees in the United States; benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don&#39;t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, PostgreSQL, Redis, PHP, Kubernetes, Docker, Kafka, React, TypeScript, Kibana, Elasticsearch, Jaeger, Sentry, Senior-Level Backend Expertise, Systems Architecture, Pragmatic Full Stack Ability, Agentic AI Development, Databases, Observability-First Mindset, Cloud &amp; Containers, Operational Ownership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network of services to protect and accelerate internet applications. It handles about 10% of HTTP requests on the internet today.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7521014</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1044456b-79a</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>We are obsessed with enabling data teams to solve the world&#39;s toughest problems. As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>
<p>This implies, among other things, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>You will be part of one of the following teams:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Scala, Java, Apache Spark, Apache Kafka, Cloud APIs (AWS, Azure, CloudFormation, Terraform), SQL, Software security, Cloud technologies (AWS, Azure, GCP, Docker, Kubernetes)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organisation that builds and runs the world&apos;s best data and AI infrastructure platform. It was founded in 2013 by the original creators of Apache Spark.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6779232002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33521936-dee</externalid>
      <Title>Software Engineer, Infrastructure (2-8 YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>Airtable&#39;s infrastructure is evolving to meet the needs of our fast-growing engineering org. We currently have openings on:</p>
<ul>
<li>Base Infrastructure: The Base Infrastructure team owns the system that powers the core of Airtable&#39;s product--serving Airtable bases. We are investing in the foundations of our homegrown in-memory database. Key projects include building replication to support zero-downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Compute: The compute pod builds and manages our Kubernetes-based platform that supports every service at Airtable, including all new AI services such as vector databases, the AI evals store, and document extraction and understanding services. Our roadmap includes a lot of exciting foundational work, such as overhauling our network stack and service discovery to simplify service setup and strengthen security, region-level disaster recovery, bringing up the compute platform from 0-&gt;1 in a new region, and building custom Kubernetes operators for reliably managing some of our most critical workloads.</li>
<li>Data Infrastructure: The Data Infrastructure team&#39;s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse. This infrastructure is used by Airtable&#39;s data engineers and analysts, as well as product developers building features powered by business data. The team is focused on scaling to petabyte volume, enabling sub-second streaming, tightening data governance, and delivering cost-efficient ML-ready datasets to power Airtable&#39;s native AI products with fresh, high-quality signals.</li>
<li>Developer Platform: The Developer Platform team sits at the intersection of all engineering at Airtable, focusing on building the internal tooling, frameworks, and CI/CD systems that power our product teams. We strive to streamline developer workflows, from build and test cycles to production deployments, and foster a best-in-class developer experience.</li>
<li>Storage: The Storage team&#39;s mission is to accelerate product development at Airtable by providing scalable, reliable, and easy-to-use storage abstractions. We use RDS MySQL, DynamoDB, Redis, and TiDB. We&#39;re looking for folks interested in distributed systems and databases who are excited to work on business-critical, petabyte-scale storage systems.</li>
<li>Traffic: We are looking for founding members of our Traffic Engineering team. We recently formed this team to ensure that traffic across Airtable&#39;s network and routing infrastructure is managed in a reliable, flexible, and secure manner. This will support improved performance in our secondary regions (EU and Australia) as well as other customer-driven projects.</li>
</ul>
<p>You will own all aspects of building, running, and improving these systems, from the underlying infrastructure all the way to the developer-facing code abstractions.</p>
<p>You will proactively identify and lead significant improvements to Airtable&#39;s infrastructure, working across teams and product areas to maximise business and engineering impact. You will work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. You will build clean, reusable, and maintainable abstractions that will be used by Airtable&#39;s engineers for years to come. You will take full ownership of components of Airtable&#39;s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p>You have 2-8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science, with a degree in CS or a related field. You are currently based in, or willing to relocate to, the San Francisco Bay Area.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$187,000-$260,000 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Kubernetes, Apache Spark, Kafka, Apache Flink, RDS MySQL, DynamoDB, Redis, TiDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://www.airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400373002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>114d6e6c-0d0</externalid>
      <Title>Staff Software Engineer (L4)</Title>
      <Description><![CDATA[<p>We&#39;re shaping the future of communications at Twilio, delivering innovative solutions to hundreds of thousands of businesses and empowering millions of developers worldwide to craft personalized customer experiences.</p>
<p>Join the team as our next Staff Software Engineer in the Enterprise AI Engineering team. Twilio is undergoing a major business transformation powered by Enterprise AI, supported by a dedicated engineering team building the foundations for a unified, secure, and scalable operating system across GTM functions (Sales, Support, Operations, etc.) as well as Internal non-GTM functions (Finance, HR, Legal, etc.).</p>
<p>In this role, you&#39;ll co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance. You will oversee the integration of complex React-based front-ends with backend modular services, ensuring a seamless UI experience.</p>
<p>As a Staff Software Engineer within Enterprise AI, you are the technical heartbeat of our products. Your role is to bridge the gap between bleeding-edge AI research and robust, full-stack production systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance.</li>
<li>Drive the development of sophisticated, stateful web applications.</li>
<li>Serve as a developer leader in distributed systems and data technologies, bringing strong software engineering skills.</li>
<li>Drive technical innovation and research to stay at the forefront of emerging data technologies and best practices.</li>
<li>Mentor and elevate a team of high-performing engineers.</li>
<li>Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>8+ years of experience in data engineering, software development, or a related field, with at least 3 years in a technical leadership role.</li>
<li>Experience with full-stack development building web apps, using modern languages and frameworks such as JavaScript, TypeScript, or React.</li>
<li>Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems.</li>
<li>Strong understanding of data modeling, data warehousing, and ETL processes, with experience designing and optimizing data pipelines.</li>
<li>Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams.</li>
<li>Strong leadership skills, with a track record of mentoring and developing high-performing engineering teams.</li>
<li>Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience developing production-quality LLM applications and using modern agent frameworks such as Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI, and/or others is a plus.</li>
<li>Expertise in big data technologies such as Hadoop, Spark, Kafka, and cloud-based data services (AWS/GCP/Azure).</li>
</ul>
<p>Travel:</p>
<p>This role will be remote and based in Colombia. Travel may be required to participate in project or team in-person meetings.</p>
<p>What We Offer:</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>
<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>full-stack development, JavaScript, Typescript, React, data engineering, software development, distributed systems, data technologies, strong software engineering skills, technical innovation, research, emerging data technologies, best practices, mentorship, team leadership, communication, collaboration, influence, alignment, leadership skills, mentoring, high-performing engineering teams, fast-paced, dynamic environment, results under tight timelines, LLM applications, modern agent frameworks, Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI, big data technologies, Hadoop, Spark, Kafka, cloud-based data services, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides APIs and services for businesses to build, scale, and operate real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7716279</Applyto>
      <Location>Remote - Colombia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21860f67-527</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritize, design, implement, test, and operate micro-services for the Databricks platform and product. This implies, among other things, writing software in Scala/Java, building data pipelines (Apache Spark™, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>Some example teams you can join:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</li>
</ul>
<p>Competencies:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production-level experience in one of: Java, Scala, C++, or a similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
<p>Pay Range Transparency: The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5408888002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a68e8bd-dd5</externalid>
      <Title>Consulting Architect - Observability</Title>
      <Description><![CDATA[<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack.</li>
<li>Leading end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation.</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles.</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement.</li>
<li>Capturing and sharing best practices, lessons learned, and solution patterns across the Elastic Services community.</li>
<li>Guiding customers in using Elastic Agents, Beats, and Logstash for time-series data ingestion, stream processing, and normalisation, along with related technologies.</li>
<li>Designing and implementing custom dashboards, visualisations, and alerting for critical observability use cases in Kibana.</li>
<li>Optimising ingestion pipelines for performance, scalability, and resiliency at enterprise scale.</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains.</li>
<li>Strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash.</li>
<li>Knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</li>
<li>Understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, and SLOs/SLIs.</li>
<li>Experience with one or more of: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code.</li>
<li>Familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</li>
<li>Proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale.</li>
<li>Hands-on expertise in distributed systems and large-scale infrastructure.</li>
<li>Ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</li>
<li>Experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, and Lucene.</li>
<li>Strong communication and presentation skills, with experience engaging directly with customers.</li>
<li>Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or a related field, or equivalent experience.</li>
<li>Comfortable working in highly distributed teams, both remote and on-site when needed.</li>
<li>May require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$133,100-$210,600 USD</Salaryrange>
      <Skills>observability, monitoring, time-series data ingestion, processing, pipelines, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, ingestion optimisation strategies, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7763314</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>396fe53d-121</externalid>
      <Title>Consulting Architect - Observability</Title>
      <Description><![CDATA[<p>As a Consulting Architect – Observability, you will play a pivotal role in helping our customers realise the value of Elastic’s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic’s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Observability platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack.</li>
<li>Leading end-to-end delivery of customer engagements, from discovery and design through implementation, enablement, and optimisation.</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption.</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles.</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement.</li>
<li>Capturing and sharing best practices, lessons learned, and solution patterns across the Elastic Services community.</li>
<li>Contributing to internal enablement, mentoring, and a culture of continuous learning and collaboration</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>5+ years as a consultant, architect, or engineer with expertise in observability, monitoring, or related domains.</li>
<li>Expertise in the Telecommunications domain, especially with Mobile networks and devices.</li>
<li>Strong experience with time-series data ingestion and processing, including pipelines with Elastic Agents, Beats, and Logstash.</li>
<li>Knowledge of messaging queues (Kafka, Redis) and ingestion optimisation strategies.</li>
<li>Understanding of observability concepts like distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs.</li>
<li>Experience with one or more: Kubernetes, cloud platforms (AWS, Azure, GCP), or infrastructure as code.</li>
<li>Familiarity with Elastic Common Schema (ECS), data parsing, and normalisation.</li>
<li>Proven experience deploying Elastic Observability (APM, UEM, logs, metrics, infra, network monitoring) or similar solutions at enterprise scale.</li>
<li>Hands-on expertise in distributed systems and large-scale infrastructure.</li>
<li>Ability to design and build dashboards, visualisations, and alerting thresholds that drive actionable insights.</li>
<li>Experience with Kubernetes, Linux, Java, databases, Docker, AWS/Azure/GCP, VMs, Lucene.</li>
<li>Strong communication and presentation skills, with experience engaging directly with customers.</li>
<li>Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or related field, or equivalent experience.</li>
<li>Comfortable working in highly distributed teams, both remote and on-site when needed.</li>
<li>May require significant travel to customer sites to support engagements and solution implementations; candidates should be comfortable with varying levels of travel based on business needs.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>observability, monitoring, Elastic Stack, time-series data ingestion, Elastic Agents, Beats, Logstash, messaging queues, Kafka, Redis, distributed tracing, metrics pipelines, log aggregation, anomaly detection, SLOs/SLIs, Kubernetes, cloud platforms, infrastructure as code, Elastic Common Schema, data parsing, normalisation, databases, Docker, VMs, Lucene</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that enables everyone to find the answers they need in real time, using all their data, at scale. The company&apos;s products are used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7440232</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b657c4e-8a1</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute. You will take ownership of infrastructure components that process trillions of events daily, driving the scalability, performance, and reliability of the systems that power product and ML workloads across the company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimize multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimize distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimization skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, stream processing, large-scale data platforms, Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, debugging, profiling, performance optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed877cf7-715</externalid>
      <Title>Member of Technical Staff - X Money, Fraud and Payments</Title>
<Description><![CDATA[<p>We&#39;re looking for an exceptional Software Engineer to focus on Fraud Engineering for a new payments platform serving 600 million+ monthly users. This high-priority role is responsible for protecting users and the platform from fraud, abuse, and risk. You&#39;ll play a key role in designing and implementing systems to detect, prevent, and mitigate fraud in real time, at scale.</p>
<p>Your work will be at the intersection of security, distributed systems, and product engineering, helping build trusted payments infrastructure from the ground up.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement fraud detection and prevention systems that operate at global scale and low latency</li>
<li>Develop risk scoring engines, anomaly detection pipelines, and real-time enforcement mechanisms</li>
<li>Collaborate with product, compliance, X Money, Fraud Prevention, and infrastructure teams to ensure a secure and seamless user experience</li>
<li>Monitor and analyze fraud trends, and proactively respond to new attack vectors</li>
<li>Define engineering standards around observability, reliability, and rapid response in fraud-related systems</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>7+ years of backend or systems engineering, with exposure to fraud, risk, or abuse prevention systems preferred</li>
<li>Skilled in distributed systems: You&#39;ve built resilient, high-throughput systems that operate under real-time constraints</li>
<li>Security-conscious: You understand threat models, data sensitivity, and defense-in-depth principles</li>
<li>Analytical and pragmatic: You value simple, high-leverage solutions and adapt quickly to evolving challenges</li>
<li>Builder mentality: You&#39;re excited by zero-to-one problems and have a proven ability to thrive in fast-paced environments. You are willing to work hard.</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Experience with real-time anomaly detection, machine learning for fraud, or rule-based risk systems</li>
<li>Familiarity with AML/KYC regulations, chargeback flows, or identity verification systems</li>
<li>Experience in fintech, trust &amp; safety, or adversarial system design</li>
<li>Comfortable working in a zero-to-one environment with rapid iteration</li>
<li>Experience with: Golang, Postgres, Kafka, Memcached</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>backend or systems engineering, fraud, risk, or abuse prevention systems, distributed systems, security-conscious, analytical and pragmatic, builder mentality, Golang, Postgres, Kafka, Memcached, real-time anomaly detection, machine learning for fraud, rule-based risk systems, AML/KYC regulations, chargeback flows, identity verification systems, fintech, trust &amp; safety, adversarial system design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The company has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4758524007</Applyto>
      <Location>New York, NY; Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4df3e714-829</externalid>
      <Title>Sr. Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled and motivated Sr. Software Engineer to join our team. As a Sr. Software Engineer, you will be responsible for developing and maintaining our Payments services, including Card Attributes, Webhooks, and Event Pipeline. You will collaborate with cross-functional teams to design, build, and optimize high-throughput, fault-tolerant services within the VGS platform.</p>
<p>Your responsibilities will include engaging in all phases of the software lifecycle - design, implement, test, deploy, and support services in production. You will maintain a culture of code quality through rigorous testing, automation, and code reviews. You will also be proactive and innovative; we rely on your feedback to build a world-class product.</p>
<p>We are looking for a candidate with deep hands-on expertise in Java and the Spring Framework, strong practical experience working with Kafka, and a solid understanding of and hands-on experience with cloud-native architecture, microservices, CI/CD, GitOps, APIs, and API Gateway. You should also have strong experience implementing and leveraging observability solutions, along with strong written and verbal communication skills; familiarity with the payment processing ecosystem is a bonus.</p>
<p>In addition to a competitive salary, you will receive flexible work hours, flexible PTO, competitive health benefits, VGS stock options, 401k plan with employer matching, life and disability insurance, pre-tax flexible spending accounts, global parental leave program, employee assistance program, home internet reimbursement, new hire home office set-up allowance, and professional learning reimbursement.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Framework, Kafka, Cloud-native architecture, Microservices, CI/CD, GitOps, APIs, API Gateway, Observability solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>VGS</Employername>
      <Employerlogo>https://logos.yubhub.co/vgs.io.png</Employerlogo>
      <Employerdescription>VGS is the world&apos;s leader in payment tokenization, providing processor-agnostic tokenization solutions to large banks, fintechs, and merchants.</Employerdescription>
      <Employerwebsite>https://www.vgs.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/verygoodsecurity/21eae4be-c4cb-48d3-9e08-c3923f3cf081</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5dd5f58c-c07</externalid>
      <Title>Principal Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a well-versed Principal Engineer to play a key role in architecting and building highly available, reliable, and scalable payments applications. Collaborate with Payments Engineering teams to design, develop, and champion best-practices, patterns, and standards for all payments applications. Work closely with our CTO and other architects to create holistic technology solutions for our customers.</p>
<p>As a Principal Engineer, you will:</p>
<ul>
<li>Collaborate and communicate with Payments Engineering teams to design, develop, and champion best-practices, patterns, and standards for all payments applications.</li>
<li>Work closely with our CTO and other architects to create holistic technology solutions for our customers.</li>
<li>Be part of the Tech Leads group, driving measurable outcomes and iterative delivery strategy, removing roadblocks, empowering others, and mentoring high-potential engineers.</li>
<li>Produce clear, detailed, and actionable design documents, architecture blueprints, architectural decisions with context, decision, and tradeoffs.</li>
<li>Be involved in hands-on development of proof-of-concepts, prototypes, and real production-ready code.</li>
<li>Mentor engineers on architecture best practices and standards.</li>
<li>Engage in all phases of the software lifecycle - design, implement, test, deploy, and support services in production.</li>
<li>Maintain a culture of code quality through rigorous testing, automation, and code reviews.</li>
<li>Be proactive and innovative - we rely on your feedback to build a world-class product.</li>
</ul>
<p>We&#39;re seeking individuals with an equal flair for creative problem-solving, enthusiasm for new technologies, and a desire to contribute to our product. You will likely be successful in this role if you identify with the following traits: attention to detail, problem solver, customer-oriented, versatile, resilient, and confident.</p>
<p>If all of this sounds interesting to you, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud SaaS environment, Highly available, reliable, and scalable SaaS applications/platforms, Backend API specs, mocks, and service implementations, Cloud-native architecture, microservices, CI/CD (GitHub Actions, Argo), GitOps, Authentication and Authorization, APIs and API Gateway, Docker, Kubernetes (EKS), Kafka (MSK), Java, Spring Framework, Python, and AWS services, Observability solutions using Grafana and Open Telemetry, DevOps, SRE, Configuration Management, and Release Management, Payments technologies and ecosystem (card networks, PSP integration)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>VGS</Employername>
      <Employerlogo>https://logos.yubhub.co/vgs.com.png</Employerlogo>
      <Employerdescription>VGS is the world&apos;s leader in payment tokenization, providing processor-agnostic tokenization solutions to large banks, fintechs, and merchants.</Employerdescription>
      <Employerwebsite>https://www.vgs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/verygoodsecurity/33e033b6-ae9b-4d51-b190-262a2cb83d96</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
<Employerdescription>Udio is an AI music creation platform.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4f82620c-fd7</externalid>
      <Title>Software Infrastructure Manager</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Rigetti is seeking a Software Infrastructure Manager to lead the team responsible for building and operating the software infrastructure that powers our quantum computing platform.</p>
<p>You will sit at the intersection of cloud systems, on-prem laboratory infrastructure, and semiconductor fabrication environments, supporting both internal research systems and the infrastructure behind Rigetti’s quantum cloud platform.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Infrastructure Leadership &amp; Team Management</strong></p>
<ul>
<li>Provide leadership and cohesion for the infrastructure engineering team, helping align priorities and ownership across the group</li>
<li>Manage and support the development of infrastructure engineers while fostering a collaborative and sustainable team culture</li>
<li>Increase organisational resilience by reducing reliance on individual experts and building stronger shared knowledge across the team</li>
<li>Support hiring and help build the case for expanding the infrastructure team as the company grows</li>
</ul>
<p><strong>Infrastructure Platform Development</strong></p>
<ul>
<li>Lead the design, development, and maintenance of the infrastructure supporting Rigetti’s quantum computing platform, including AWS-based cloud systems, Kubernetes-orchestrated container workloads, CI/CD pipelines (GitLab CI), infrastructure automation (Terraform and Ansible), and backend services built primarily in Python/Flask.</li>
<li>Support data infrastructure including Postgres databases and Kafka streaming pipelines</li>
<li>Maintain hybrid infrastructure integrating cloud systems with on-prem laboratory and fabrication environments</li>
<li>Improve internal developer tooling and infrastructure usability to increase engineering productivity across teams</li>
<li>Guide architecture discussions around hybrid cloud and on-prem infrastructure supporting both research environments and external deployments</li>
</ul>
<p><strong>Infrastructure Maturity &amp; Operational Excellence</strong></p>
<ul>
<li>Help mature Rigetti’s infrastructure capabilities as the company scales by introducing more robust operational practices and processes</li>
<li>Improve system resilience, reliability, and documentation for production environments</li>
<li>Identify opportunities to simplify overly complex infrastructure patterns and improve operational efficiency, including examining/improving cost structure</li>
</ul>
<p><strong>Cross-Functional Infrastructure Support</strong></p>
<ul>
<li>Work closely with engineering leadership and cross-functional teams including software engineers, quantum engineers, and hardware teams</li>
<li>Support the company’s transition toward external hardware deployments, requiring infrastructure that operates across both cloud and on-prem environments</li>
<li>Integrate infrastructure systems with lab devices, fabrication environments, and specialised hardware platforms</li>
</ul>
<p><strong>Vendor &amp; Networking Coordination</strong></p>
<ul>
<li>Manage relationships with infrastructure and networking vendors</li>
<li>Evaluate long-term networking strategy and opportunities to bring additional networking capabilities in-house</li>
<li>Coordinate with contractors and partners supporting infrastructure and networking systems</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience leading or managing Infrastructure, DevOps, or SRE teams</li>
<li>Experience building infrastructure in startup or rapidly evolving environments</li>
<li>Strong hands-on experience with modern infrastructure stacks, including:
<ul>
<li>AWS or other major cloud providers</li>
<li>Kubernetes and containerised workloads</li>
<li>Infrastructure automation tools (Terraform, Ansible, etc.)</li>
</ul>
</li>
<li>Experience supporting production distributed systems</li>
<li>Strong communication skills and the ability to collaborate across highly technical teams</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>As engineering leaders, we value diversity and are committed to building a culture of inclusion to attract and engage innovative thinkers. Our technology, meant to serve all of humanity, cannot succeed if those who build it do not mirror the diversity of the communities we serve. Applications from women, minorities, and other under-represented groups are encouraged.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, Kubernetes, Terraform, Ansible, Python, Flask, Postgres, Kafka, GitLab CI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Rigetti Computing</Employername>
      <Employerlogo>https://logos.yubhub.co/rigetti.com.png</Employerlogo>
      <Employerdescription>Rigetti Computing is a pioneer in full-stack quantum computing. The company operates quantum computers over the cloud and serves global enterprise, government, and research clients.</Employerdescription>
      <Employerwebsite>https://www.rigetti.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/rigetti/3e4cacb0-5870-4b37-9737-c69c4cdef8cf</Applyto>
      <Location>Berkeley</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>886118d3-6a1</externalid>
      <Title>Senior Data Engineer - Data Engineering</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The main goal of the DE team in 2024-25 is to build robust golden data sets to power our business goals of creating more insights-based products. Making data-driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data.</p>
<p>Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like DBT, Airflow, Redshift, ElasticSearch, Atlanta, and Retool to orchestrate data pipelines and define workflows.</p>
<p>We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid&#39;s data strategy and a data-first mindset.</p>
<p>Our engineering culture is IC-driven -- we favor bottom-up ideation and empowerment of our incredibly talented team.</p>
<p>We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.</p>
<p>You will be in a high-impact role that will directly enable business leaders to make faster and more informed business judgments based on the datasets you build.</p>
<p>You will have the opportunity to carve out the ownership and scope of internal datasets and visualizations across Plaid, a currently unowned area that we intend to take over and build SLAs for.</p>
<p>You will have the opportunity to learn best practices and up-level your technical skills from our strong DE team and from the broader Data Platform team.</p>
<p>You will collaborate closely and build strong cross-functional partnerships with teams across Plaid, from Engineering and Product to Marketing and Finance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design, and data usage principles.</li>
<li>Keeping data quality and performance top of mind while designing datasets.</li>
<li>Leading key data engineering projects that drive collaboration across the company.</li>
<li>Advocating for adopting industry tools and practices at the right time.</li>
<li>Owning core SQL and Python data pipelines that power our data lake and data warehouse.</li>
<li>Delivering well-documented data with defined dataset quality, uptime, and usefulness.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>4+ years of dedicated data engineering experience, solving complex data pipeline issues at scale.</li>
<li>You have experience building data models and data pipelines on top of large datasets (on the order of 500 TB to petabytes).</li>
<li>You value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow.</li>
<li>You have experience working with performant warehouses and data lakes: Redshift, Snowflake, Databricks.</li>
<li>You have experience building and maintaining batch and real-time pipelines using technologies like Spark and Kafka.</li>
<li>You appreciate the importance of schema design, and can evolve an analytics schema on top of unstructured data.</li>
<li>You are excited to try out new technologies, and like to produce proof-of-concepts that balance technical advancement with user experience and adoption.</li>
<li>You like to get deep in the weeds to manage, deploy, and improve low-level data infrastructure.</li>
<li>You are empathetic when working with stakeholders: you listen, ask the right questions, and collaboratively arrive at the best solutions for their needs while balancing infrastructure and business constraints.</li>
<li>You are a champion for data privacy and integrity, and always act in the best interest of consumers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p>We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p>We are always looking for team members that will bring something unique to Plaid!</p>
<p>Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.</p>
<p>Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>SQL, Python, DBT, Airflow, Redshift, ElasticSearch, Atlanta, Retool, Spark, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and services for developers to connect financial accounts to applications and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/022278b3-0943-44b3-a54b-1de421017589</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e503559e-cf7</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>
<p><strong>Job Description:</strong></p>
<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio waves or images. You could do it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break a signal down into its constituent frequencies.</p>
<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>
<p><strong>About Us:</strong></p>
<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>
<p><strong>Job Responsibilities:</strong></p>
<p>As part of this role, you will:</p>
<ul>
<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>
<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human-in-the-loop workflows, escalations, and feeding human decisions back into the service</li>
<li>Implement testing, observability, alerting, and disaster recovery for all services</li>
<li>Implement tracing, performance, and regression testing</li>
<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<p>The ideal candidate for the role has:</p>
<ul>
<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>
<li>Expertise with:
<ul>
<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>
<li>SQL, dbt, Python</li>
<li>OLAP / OLTP data modelling and architecture</li>
<li>Key-value stores: Redis, DynamoDB, or equivalent</li>
<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>
<li>API frameworks: FastAPI, Flask, etc.</li>
<li>Production ML service experience</li>
<li>Working across a full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>
</ul>
</li>
</ul>
<p><strong>Total Rewards Package:</strong></p>
<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p><strong>Salary Range:</strong></p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $200,700 - $250,900</li>
<li>Canadian employees (any location): CAD 189,700 - 237,100</li>
</ul>
<p><strong>Diversity &amp; Belonging:</strong></p>
<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)</Salaryrange>
      <Skills>Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, dynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5639559004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd212aea-514</externalid>
      <Title>Backend Engineer, Agents</Title>
<Description><![CDATA[<p>Hebbia is seeking a skilled Backend Engineer to join its Agents team. As a Backend Engineer, you will be responsible for building highly efficient software that leverages the latest agentic technologies. You will integrate the product experience with powerful distributed systems, protecting Hebbia&#39;s technical edge via elegant software design, efficient data communication, and sophisticated integrations.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on backend software engineering.</li>
<li>Proficiency in building backend and API systems using technologies such as Python, Java, or Go.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams.</li>
<li>Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Embraces rapid prototyping with an emphasis on user feedback.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<p>Bonuses:</p>
<ul>
<li>Experience building agentic systems or LLM enabled products.</li>
<li>Frequent user of AI products, especially during the development lifecycle (e.g., Cursor, Claude Code).</li>
</ul>
<p>Compensation:
The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate&#39;s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, AWS, Kafka, ElasticSearch, PostgreSQL, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for leading asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584766005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2cf203a5-5c5</externalid>
      <Title>Platform Engineer, Document Intelligence</Title>
      <Description><![CDATA[<p>About Hebbia</p>
<hr>
<p>The AI platform for investors and bankers that generates alpha and drives upside.</p>
<p>Founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz, Hebbia powers investment decisions for BlackRock, KKR, Carlyle, Centerview, and 40% of the world’s largest asset managers. Our flagship product, Matrix, delivers industry-leading accuracy, speed, and transparency in AI-driven analysis. It is trusted to help manage over $30 trillion in assets globally.</p>
<p>We deliver the intelligence that gives finance professionals a definitive edge. Our AI uncovers signals no human could see, surfaces hidden opportunities, and accelerates decisions with unmatched speed and conviction. We do not just streamline workflows. We transform how capital is deployed, how risk is managed, and how value is created across markets.</p>
<p>Hebbia is not a tool. Hebbia is the competitive advantage that drives performance, alpha, and market leadership.</p>
<hr>
<p>The Team</p>
<hr>
<p>The Document Intelligence team at Hebbia builds cutting-edge AI solutions that transform how users discover and interact with billions of private and public documents. Our products, including Hebbia&#39;s Browse application, enable intelligent document exploration, powerful search capabilities, and deep insights extraction. We focus on developing advanced data ingestion and search technologies that deliver intuitive, explainable, and highly responsive experiences. Working closely with customers, our team continuously iterates to address real-world challenges and drive impactful, data-driven decisions. Our goal is to empower users by seamlessly turning vast and complex document repositories into actionable intelligence.</p>
<hr>
<p>The Role</p>
<hr>
<p>Platform engineering at Hebbia is about excellent, scalable enablement. You will own the core distributed systems that power billions of tokens across millions of dollars of AUM, deploying efficient systems and building software tightly coupled with state-of-the-art infrastructure and system design. Hebbia’s advantage is built on operating at the edge of the tokenomics curve, and you will serve as a key contributor in this area. We value engineers who think on their feet, innovate, and can solve for exponential scale.</p>
<hr>
<p>Responsibilities</p>
<hr>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant time latency for indexing any content in the world, regardless of data volume.</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
<hr>
<p>Who You Are</p>
<hr>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field. A strong academic background with coursework in data structures, algorithms, and software development is preferred.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on distributed systems and platform engineering.</li>
<li>Proficiency in building backend and distributed systems using technologies such as Python, Java, or Go.</li>
<li>Deep understanding of scalable system design, performance optimization, and resilience engineering.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Knowledge of workflow orchestration and execution platforms like Airflow, Temporal or Prefect.</li>
<li>Proven experience enabling observability patterns.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams. Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<hr>
<p>Bonuses:</p>
<ul>
<li>Experience building distributed systems leveraging technologies such as etcd or Apache Zookeeper.</li>
<li>Frequent user of AI products, especially during the development lifecycle (e.g., Cursor, Claude Code).</li>
</ul>
<hr>
<p>Compensation</p>
<hr>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications. Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>
<hr>
<p>Life @ Hebbia</p>
<hr>
<ul>
<li>PTO: Unlimited</li>
<li>Insurance: Medical + Dental + Vision + 401K</li>
<li>Eats: Catered lunch daily + DoorDash dinner credit if you ever need to stay late</li>
<li>Parental leave policy: 3 months non-birthing parent, 4 months for birthing parent</li>
<li>Fertility benefits: $15k lifetime benefit</li>
<li>New hire equity grant: competitive equity package with unmatched upside potential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>backend and distributed systems, Python, Java, Go, scalable system design, performance optimization, resilience engineering, cloud platforms, AWS, Kafka, ElasticSearch, PostgreSQL, Redis, workflow orchestration and execution platforms, Airflow, Temporal, Prefect, observability patterns, etcd, Apache Zookeeper, AI products, Cursor, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz, and powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584750005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>aebaacf5-640</externalid>
      <Title>Integrations Engineer</Title>
<Description><![CDATA[<p>You will own the full lifecycle of integrations that power Hebbia&#39;s AI, from designing connectors to deploying them in production, monitoring their behavior, and debugging failures in real time.</p>
<p>You&#39;ll work across systems like Snowflake, S3, SharePoint, and internal customer infrastructure, building pipelines that need to handle real-world complexity: unreliable APIs, evolving schemas, massive datasets, and edge cases that don’t show up in documentation.</p>
<p>This role is hands-on, high-ownership, and deeply technical. You won’t just write code; you’ll develop the instincts to operate and debug complex distributed systems in production.</p>
<p>You will build connectors and ingestion pipelines that bring enterprise data into Hebbia&#39;s AI platform, from Snowflake warehouses and SharePoint libraries to live pricing feeds, high-velocity news data, and proprietary customer systems.</p>
<p>You will design and operate pipelines that handle scale, failures, and edge cases gracefully.</p>
<p>You will debug issues across APIs, auth systems, and data formats, often under real-time customer pressure.</p>
<p>You will own reliability end-to-end: monitoring, alerting, on-call, and incident response.</p>
<p>You will improve internal tooling and observability to make systems more robust and easier to operate.</p>
<p>You will partner with product and customer teams to scope, prioritize, and ship the integrations that unlock Hebbia&#39;s highest-value use cases.</p>
<p>You will design and ship agents that sit on top of the ingestion layer, making enterprise data accessible and actionable across all of Hebbia&#39;s product surfaces, from document analysis to structured query workflows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $265,000</Salaryrange>
      <Skills>Python, APIs, OAuth flows, webhook patterns, rate limiting, pagination, cloud infrastructure, AWS, Kafka, PostgreSQL, Redis, ElasticSearch, enterprise data platforms, document processing pipelines, content extraction systems, agentic systems, LLM-enabled products, AI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, founded in 2020 by George Sivulka and backed by Peter Thiel and Andreessen Horowitz.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4675784005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1e388b24-397</externalid>
      <Title>Backend Engineer, Growth and Data</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Backend Software Engineer to join our Growth and Data team. As a key member of our team, you will build and maintain powerful backend systems that drive user engagement and fuel our continued expansion. Your role involves architecting and implementing robust APIs, services, and infrastructure that empower customers with tailored, high-value experiences.</p>
<p>Your responsibilities will include owning critical system components, unlocking O(1) universal indexing, driving performance optimization, and mentoring and guiding junior engineers. You will also collaborate closely with product teams, designers, and frontend engineers to take ownership of core backend features from initial design through deployment.</p>
<p>To succeed in this role, you will need a Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field, and 5+ years of software development experience at a venture-backed startup or top technology firm. You should be proficient in building backend and API systems using technologies such as Python, Java, or Go, and have extensive experience with cloud platforms (e.g., AWS).</p>
<p>You will also need working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis, and the ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, Cloud platforms (e.g., AWS), Kafka, ElasticSearch, PostgreSQL, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside, backed by Peter Thiel and Andreessen Horowitz.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584761005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b26de846-225</externalid>
      <Title>Backend Engineer, Agent Collaboration Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Backend Engineer to join our Agent Collaboration platform team. As a Backend Engineer at Hebbia, you will blend expertise in systems, application layer software, and data modeling to build highly efficient software solutions. You will be responsible for leveraging the latest software and agentic solutions and integrating product experience with powerful distributed systems. Your key responsibilities will include:</p>
<ul>
<li>Own critical system components: Take complex requirements and turn them into robust, scaled solutions that solve real customer needs.</li>
<li>Unlock O(1) universal indexing: Build and iterate on our high-scale document build system that enables constant-time latency for indexing any content in the world, regardless of data volume (a toy illustration follows this list).</li>
<li>Drive performance optimization: Architect and implement performance-tuning solutions to ensure our systems operate efficiently at scale, minimizing latency and maximizing throughput across millions of documents.</li>
<li>Mentor and guide: Provide technical leadership, mentorship, and guidance to junior engineers, fostering a culture of learning and growth.</li>
</ul>
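<p>Conceptually, the constant-time claim rests on hash-based indexing; the toy Python sketch below shows the idea (for illustration only; the production system is distributed and far more involved).</p>
<pre><code>import hashlib

# a toy content-addressed index: a hash map keyed by document digest
index = {}

def put(document: bytes) -> str:
    """Store a document under its content hash; dict insert is amortised O(1)."""
    key = hashlib.sha256(document).hexdigest()
    index[key] = document
    return key

def get(key: str) -> bytes:
    """Dict lookup is O(1) on average, independent of corpus size."""
    return index[key]
</code></pre>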
<p>To succeed in this role, you will need:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Statistics, or a related field.</li>
<li>5+ years software development experience at a venture-backed startup or top technology firm, with a focus on backend software engineering.</li>
<li>Proficiency in building backend and API systems using technologies such as Python, Java, or Go.</li>
<li>Extensive experience with cloud platforms (e.g., AWS).</li>
<li>Working experience with one or more of the following: Kafka, ElasticSearch, PostgreSQL, and/or Redis.</li>
<li>Ability to analyze complex problems, propose innovative solutions, and effectively communicate technical concepts to both technical and non-technical stakeholders.</li>
<li>Proven experience in leading software development projects and collaborating with cross-functional teams.</li>
<li>Strong interpersonal and communication skills to foster a collaborative and inclusive work environment.</li>
<li>Enthusiasm for continuous learning and professional growth. A passion for exploring new technologies, frameworks, and software development methodologies.</li>
<li>Embraces rapid prototyping with an emphasis on user feedback.</li>
<li>Autonomous and excited about taking ownership over major initiatives.</li>
</ul>
<p>As a bonus, experience building agentic systems or LLM-enabled products, frequent use of AI products, especially during the development lifecycle, will be highly valued.</p>
<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate&#39;s experience and qualifications.</p>
<p>At Hebbia, we offer a range of benefits, including:</p>
<ul>
<li>Unlimited PTO</li>
<li>Medical, dental, and vision insurance</li>
<li>401K plan</li>
<li>Catered lunch daily</li>
<li>DoorDash dinner credit if you ever need to stay late</li>
<li>3 months non-birthing parent leave, 4 months for birthing parent</li>
<li>$15k lifetime fertility benefit</li>
<li>Competitive equity package with unmatched upside potential</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $300,000</Salaryrange>
      <Skills>Python, Java, Go, Cloud platforms (e.g., AWS), Kafka, ElasticSearch, PostgreSQL, Redis, Agentic systems, LLM-enabled products, AI products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for top asset managers overseeing more than $30 trillion in assets globally.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4584764005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>be821069-a7f</externalid>
      <Title>Asset Data Engineer</Title>
      <Description><![CDATA[<p>Join the Asset Data team and build the streaming data infrastructure that powers Anchorage&#39;s digital asset platform. You&#39;ll design systems that ingest real-time blockchain and market data from diverse providers, transforming raw feeds into certified, trusted data products.</p>
<p>We&#39;re creating contract-governed supply chains that let us onboard new assets and providers quickly while maintaining the low-latency, high-availability SLOs our business depends on.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build streaming data pipelines for blockchain data (onchain transactions, staking rewards, validator info) and market data (prices, trades, order books)</li>
<li>Design and implement data contracts and validation gates that enforce quality and schema compliance at ingestion points (a minimal sketch follows this list)</li>
</ul>
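<p>A minimal Python sketch of such a validation gate; the record shape and field types below are hypothetical, not Anchorage&#39;s actual contracts.</p>
<pre><code># assumed contract for a staking-reward record (illustrative field names/types)
REQUIRED = {"chain": str, "validator": str, "epoch": int, "reward": float}

def meets_contract(record: dict) -> bool:
    """Every required field must be present with the expected type."""
    return all(
        isinstance(record.get(field), expected)
        for field, expected in REQUIRED.items()
    )

def ingest(batch):
    """Gate at the ingestion point: compliant rows flow on, the rest are quarantined."""
    accepted, quarantined = [], []
    for record in batch:
        (accepted if meets_contract(record) else quarantined).append(record)
    return accepted, quarantined
</code></pre>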
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Collaborate on designing the architecture for standardized ingestion patterns that enable rapid onboarding of new blockchains and market data feeds</li>
<li>Establish redundancy and failover patterns to meet Tier 1 availability and freshness SLOs for critical data products</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Collaborate with Protocols, Trading, and Custody teams to understand their data needs and design certified data products with clear SLAs</li>
<li>Partner with Data Platform team on orchestration, storage patterns (BigLake), and metadata management (Atlan)</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Advocate for contract-governed data supply chains and help establish engineering standards for producer patterns across the org</li>
<li>Contribute to architectural decisions and help mature the team&#39;s practices around observability, testing, and operational excellence</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5-7+ years building streaming or high-throughput data systems: You have experience designing and operating production data pipelines that handle large volumes with low latency and high reliability</li>
<li>Solid backend engineering skills: You&#39;re proficient in Go or Python and have built services that interact with streaming infrastructure (Kafka, pub/sub, websockets, REST APIs)</li>
<li>Blockchain data familiarity: You understand blockchain concepts and are comfortable working with on-chain data (transactions, events, staking, validators) across multiple chains with different data models</li>
<li>Data engineering adjacent skills: You&#39;re comfortable with data transformation patterns, schema evolution, and working with cloud data warehouses (BigQuery) and storage systems (GCS, BigLake)</li>
<li>Operational mindset: You have experience deploying and operating services on cloud platforms (preferably GCP), with strong practices around monitoring, alerting, and incident response</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Staking data expertise: You&#39;ve worked with staking rewards, validator data, or proof-of-stake blockchain infrastructure</li>
<li>Market data systems: You&#39;ve built systems that ingest and process market data (prices, trades, order books) from exchanges or data vendors</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Kafka, pub/sub, websockets, REST APIs, blockchain data, data transformation patterns, schema evolution, cloud data warehouses, storage systems, staking data expertise, market data systems, infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/82139746-fb0e-44b9-bbb6-ae078e5d251a</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9cdc0a4d-95f</externalid>
      <Title>Staff Software Engineer, Stream Compute</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Stream Compute team at Stripe. As a key member of this team, you will help define and deliver the next generation of Stripe&#39;s Flink-first stream compute infrastructure. This is a unique opportunity to work on some of the hardest problems in operating Flink in production, such as state management, exactly-once processing, performance isolation, and automated recovery.</p>
<p>Your primary responsibilities will include designing, building, and operating stream compute infrastructure with Apache Flink at the center, partnering with product and platform teams across Stripe to understand requirements, unblocking Flink adoption, and improving how stream processing infrastructure is used end-to-end. You will also define and implement operational best practices to improve resilience and reliability at scale, drive fleet-level automation and standardization, and lead initiatives that raise the bar on Flink availability and state durability.</p>
<p>To succeed in this role, you should have experience as a technical lead for team(s) working on distributed systems, including scaling them in fast-moving environments. You should also have hands-on experience with big data technologies such as Flink, Spark, Kafka, Pulsar, or Pinot, and experience developing, maintaining, and debugging distributed systems built with open source tools. Additionally, you should have strong software engineering skills, a passion for big data and distributed systems, and the ability to write high-quality code in programming languages like Go, Java, or Scala.</p>
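<p>For orientation, a minimal PyFlink sketch showing the checkpointing switch that underpins Flink&#39;s exactly-once guarantees; the pipeline itself is a placeholder, not Stripe&#39;s code.</p>
<pre><code>from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# periodic checkpoints every 60s; checkpointed state is what makes
# exactly-once recovery possible after a failure
env.enable_checkpointing(60000)

# placeholder pipeline standing in for a real Kafka-to-sink job
env.from_collection([1, 2, 3]).map(lambda x: x * 2).print()
env.execute("checkpointed-sketch")
</code></pre>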
<p>If you&#39;re interested in joining our team and contributing to the development of our stream compute infrastructure, please don&#39;t hesitate to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Flink, Kafka, Temporal, AWS services, Distributed systems, Big data technologies, Software engineering, Go, Java, Scala, Streaming infrastructure, Real-time processing frameworks, Control planes, Open source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7767063</Applyto>
      <Location>San Francisco, Seattle, New York, Toronto</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>3367a9d1-967</externalid>
      <Title>Engineering Manager, Data Engineering Solutions</Title>
      <Description><![CDATA[<p>We&#39;re looking for a manager to drive the Data Engineering Solutions Team in solving high-impact, cutting-edge data problems. The ideal candidate will be someone that has built data pipelines for large scale volume, is deeply knowledgeable of Data Engineering tools including Airflow/Spark/Kafka/Flink, is empathetic, excels at building strong relationships, and collaborates effectively with other Stripe teams to understand their use cases and unlock new capabilities.</p>
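<p>For context, the shape of the Airflow pipelines mentioned above; the dag_id, schedule, and task below are placeholders (and <code>schedule=</code> assumes Airflow 2.4+), not Stripe&#39;s code.</p>
<pre><code>from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def build_report():
    """Placeholder for real reporting logic."""
    print("building canonical dataset")

# a minimal daily reporting DAG
with DAG(
    dag_id="reporting_pipeline",     # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="build_report", python_callable=build_report)
</code></pre>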
<p>Key Responsibilities:</p>
<ul>
<li>Deliver cutting-edge data pipelines that scale to users&#39; needs, focusing on reliability and efficiency.</li>
<li>Lead and manage a team of ambitious, talented engineers, providing mentorship, guidance, and support to ensure their success.</li>
<li>Drive the execution of key reporting initiatives for Stripe, overseeing the entire development lifecycle from planning to delivery while maintaining high standards of quality and timely completion.</li>
<li>Collaborate with product managers and key leaders across the company to create a shared roadmap, drive adoption of canonical datasets and data warehouses, promote golden paths, and ensure Stripes are using trustworthy data.</li>
<li>Understand user needs and pain points to prioritize engineering work and deliver high-quality solutions that meet user needs.</li>
<li>Provide hands-on technical leadership in architecture/design, vision/direction/requirements setting, and incident response processes for your reports.</li>
<li>Foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.</li>
<li>Partner with our recruiting team to attract and hire top talent, and define the overall hiring strategies for your team.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Spark, Kafka, Flink, Data Engineering, Team Management, Leadership, Communication, Problem-Solving, Iceberg, Change Data Capture, Hive Metastore, Pinot, Trino, AWS Cloud</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7496118</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>0b1fb5b7-d63</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Data Platform Engineer to join our team. As a Data Platform Engineer, you will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building for Scale: You will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</li>
<li>Mastering the Orchestration: You’ll dive deep into Kubernetes, leveraging Operators and Helm to automate complex data workflows and platform management, building out Kubernetes-native data and AI architecture (a small sketch follows this list).</li>
<li>Bridging the Clouds: You will improve our existing tooling and implement new, seamless integrations between our AWS and GCP environments.</li>
<li>Defining our State: You’ll use Terraform to manage and define our entire data infrastructure through code, ensuring reproducibility and transparency across the stack.</li>
</ul>
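<p>A small illustrative sketch (not Starling&#39;s code) of operator-style automation using the official Kubernetes Python client; the namespace and restart threshold are assumptions.</p>
<pre><code>from kubernetes import client, config, watch

config.load_kube_config()        # use load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

# stream pod events in a hypothetical data-platform namespace and
# flag anything that keeps crash-looping
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="data-platform"):
    pod = event["object"]
    for status in pod.status.container_statuses or []:
        if status.restart_count > 3:     # arbitrary threshold for the sketch
            print(f"{pod.metadata.name} restarted {status.restart_count} times")
</code></pre>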
<p>Requirements:</p>
<ul>
<li>K8s Expertise: You have a solid understanding and practical experience with Kubernetes, specifically working with Operators and Helm to manage complex application lifecycles.</li>
<li>The Engineer&#39;s Mindset: You are proficient in Python or Java and enjoy writing clean, efficient code to solve infrastructure challenges.</li>
<li>Cloud Native: You are comfortable working in at least one of the major cloud providers (AWS or GCP) and understand how to get the best out of their managed services.</li>
<li>Optimising and Refining: You will improve our current data infrastructure and deploy greenfield Kubernetes-native OSS projects.</li>
</ul>
<p>Bonus points if you have:</p>
<ul>
<li>Experience with SQL-based transformation workflows, specifically using dbt within BigQuery.</li>
<li>Familiarity with streaming and ingestion tech like Kafka or Debezium.</li>
<li>A background in Linux administration or data management best practices.</li>
</ul>
<p>Interview process:</p>
<p>Interviewing is a two-way process and we want you to have the time and opportunity to get to know us, as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious. In general, you can expect the below, following a chat with one of our Talent Team:</p>
<ul>
<li>Stage 1 - 30 minutes with one of the team</li>
<li>Stage 2 - Take-home challenge</li>
<li>Stage 3 - 60 minutes technical interview with two team members</li>
<li>Stage 4 - 45 minutes final with two data executives</li>
</ul>
<p>Benefits:</p>
<ul>
<li>25 days holiday (plus take your public holiday allowance whenever works best for you)</li>
<li>An extra day’s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance at 4x your salary &amp; group income protection</li>
<li>Private Medical Insurance with VitalityHealth including mental health support and cancer care</li>
<li>Partner benefits include discounts with Waitrose, Mr&amp;Mrs Smith and Peloton</li>
<li>Generous family-friendly policies</li>
<li>Perkbox membership giving access to retail discounts, a wellness platform for physical and mental health, and weekly free and boosted perks</li>
<li>Access to initiatives like Cycle to Work, Salary Sacrificed Gym partnerships and Electric Vehicle (EV) leasing</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, Java, Terraform, AWS, GCP, SQL, dbt, BigQuery, Kafka, Debezium, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank operating in the UK, employing over 3,000 people across multiple locations.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/1EA5EDDAD9</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>b1e8058a-2ea</externalid>
      <Title>Data Science Manager</Title>
      <Description><![CDATA[<p>As the Manager of Data Science, Games Tech, you will be a transformational leader, responsible for guiding and inspiring a dedicated team of data scientists and machine learning engineers. In this role, you’ll drive the creation of groundbreaking data solutions that enhance gameplay, improve user engagement, and optimise business outcomes.</p>
<p><strong>Key Leadership Responsibilities</strong></p>
<ul>
<li>Mentorship &amp; Development: Provide ongoing mentorship, coaching, and professional development opportunities to foster growth and enhance team performance.</li>
<li>Partnerships: Act as a trusted partner across the organisation, advocating for data-driven decision-making and empowering business units to adopt data products.</li>
<li>Ownership &amp; Accountability: Assume full accountability for data science projects, from execution through final integration and outcome assessment, ensuring that your team delivers impactful results on time and within scope.</li>
<li>Insight Communication: Translate sophisticated analytical insights into actionable recommendations, communicating them to the senior leadership team to inform critical business decisions, with the ability to persuade and influence stakeholders.</li>
</ul>
<p><strong>Key Technical Responsibilities</strong></p>
<ul>
<li>Data Science Best Practices: Drive best practices in A/B testing, predictive modelling, user clustering, and reinforcement learning, continually setting the standard for the value data science delivers.</li>
<li>Engineering Best Practices: Be responsible for the implementation of the best software engineering practices for internal tools and ML/RL model development, define software architecture standards, implement code review practices, auto-tests, improve observability, reproducibility and monitoring of ML/RL solutions.</li>
<li>Infrastructure Ownership: Own the development of analytical frameworks, including A/B testing (using Bayesian inference and contextual multi-armed bandit techniques) and other data science tooling, ensuring scalability, accuracy, and reliability across projects (a toy bandit sketch follows this list).</li>
<li>Product &amp; Engineering Collaboration: Coordinate integration of analytical solutions into games and platforms, partnering closely with product and engineering to ensure end-to-end solution success.</li>
</ul>
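<p>A toy Beta-Bernoulli Thompson sampling loop, the simplest member of the bandit family named above; the variant names, priors, and rewards are illustrative only.</p>
<pre><code>import random

# Beta(1, 1) uniform priors over two hypothetical game variants
successes = {"variant_a": 1, "variant_b": 1}
failures = {"variant_a": 1, "variant_b": 1}

def choose_variant():
    """Sample a conversion rate from each posterior; play the best draw."""
    draws = {v: random.betavariate(successes[v], failures[v]) for v in successes}
    return max(draws, key=draws.get)

def record_outcome(variant, converted):
    """Update the posterior for the variant that was shown."""
    if converted:
        successes[variant] += 1
    else:
        failures[variant] += 1
</code></pre>
<p>Because each arm is chosen in proportion to the probability that it is best, traffic shifts automatically toward the stronger variant as evidence accumulates.</p>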
<p><strong>What we need from you</strong></p>
<ul>
<li>Expertise in clustering, predictive modelling, reinforcement learning, and Bayesian statistics.</li>
<li>PhD, MSc, or equivalent experience in Data Science, Computer Science, Statistics, Physics or related field</li>
<li>5+ years of Data Science experience with a minimum of 2 years in a leadership role</li>
<li>Practical experience in software engineering, proven track record in design and development of the customer-facing products</li>
<li>Experience in ML Ops and deploying machine learning models at scale.</li>
<li>Proficiency in Python, and familiarity with data processing technologies (e.g., Kafka, Spark) and/or cloud platforms (e.g., GCP, AWS, or Azure).</li>
<li>Ability to work on a hybrid work basis requiring at least 3 days a week in our central London office</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>clustering, predictive modelling, reinforcement learning, Bayesian statistics, Python, Kafka, Spark, GCP, AWS, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Product Madness</Employername>
      <Employerlogo>https://logos.yubhub.co/productmadness.com.png</Employerlogo>
      <Employerdescription>Product Madness is a global gaming company that creates top-grossing, leading titles in the social casino genre, including Heart of Vegas, Lightning Link, Cashman Casino.</Employerdescription>
      <Employerwebsite>https://www.productmadness.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/London-United-Kingdom/Data-Science-Manager_R0020843-1</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>d4dabbbc-b6f</externalid>
      <Title>Principal Data Scientist</Title>
      <Description><![CDATA[<p>Are you ready to join a world-class team and make a significant impact on the gaming industry? At Aristocrat, we aim to bring happiness to life through the power of play. We seek a Principal Data Scientist to help us reach our ambitious goals. You will have a vital role in enhancing gameplay, boosting player engagement, and improving business outcomes with your advanced data expertise. This opportunity allows you to work on innovative projects, collaborate with diverse teams, and guide critical initiatives that will shape the future of our leading games.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-impact data science initiatives end-to-end, including problem framing, methodology selection, experiment development, implementation partnership, and impact measurement.</li>
<li>Build and deliver machine learning and reinforcement learning solutions to improve player engagement, retention, monetization, and operational outcomes.</li>
<li>Lead the modeling framework for complex systems, guaranteeing comprehensive evaluation and monitoring of causal inference, uplift modeling, sequential decisioning, bandits/reinforcement learning, and forecasting.</li>
<li>Partner with game teams to define success metrics, guardrails, and decision frameworks, translating analytical results into actionable product and operational actions.</li>
<li>Define and uphold engineering standards and guidelines for model development, including validation, uncertainty, reproducibility, and bias/quality checks.</li>
<li>Drive scalable experimentation with A/B and multi-armed bandit testing frameworks, power analysis, variance reduction, and online-offline alignment (a small power-analysis sketch follows this list).</li>
<li>Work together with Data Engineering, MLOps, and Game Tech teams to guarantee dependable data foundations, feature accessibility, and model deployment pathways.</li>
<li>Build internal data products to improve the speed and quality of decision-making, such as A/B-test calculators, decision tools, and automated insights.</li>
<li>Provide technical leadership through hands-on building, code reviews, mentoring, and coaching, raising the standard of data science craft across the organization.</li>
<li>Serve as a reliable collaborator throughout the organization, promoting data-informed decision-making and enabling business units to embrace data products.</li>
<li>Translate complex analytical insights into actionable recommendations, presenting them to senior leadership to inform critical business decisions and encourage collaborators.</li>
</ul>
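<p>A small power-analysis sketch using statsmodels; the baseline conversion, lift, and thresholds are placeholders, not Aristocrat metrics.</p>
<pre><code>from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# players per arm needed to detect a lift from 10% to 11% conversion
# at alpha = 0.05 with 80% power (illustrative numbers)
effect = proportion_effectsize(0.10, 0.11)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"players per arm: {n:.0f}")
</code></pre>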
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PhD or MSc in Data Science, Computer Science, Statistics, Physics, Mathematics, or a related quantitative field, 5+ years of professional data science experience, Demonstrated proficiency in clustering, predictive modeling, reinforcement learning, and Bayesian statistics, Hands-on experience in software engineering, MLOps, and deploying machine learning models at scale, Proficiency in SQL, Python, and familiarity with big data technologies (e.g., Kafka, Spark) and/or cloud platforms (e.g., GCP, AWS, or Azure), Industry knowledge: Experience in gaming or digital entertainment is a strong plus</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Aristocrat</Employername>
      <Employerlogo>https://logos.yubhub.co/aristocrat.com.png</Employerlogo>
      <Employerdescription>Aristocrat is a global gaming company with a portfolio of regulated land-based gaming, social casino, and regulated online real money gaming products.</Employerdescription>
      <Employerwebsite>https://www.aristocrat.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/London-United-Kingdom/Principal-Data-Scientist_R0020855</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>cb592721-c78</externalid>
      <Title>Associate DevOps Engineer</Title>
      <Description><![CDATA[<p><strong>Associate DevOps Engineer</strong></p>
<p><strong>What we&#39;re all about.</strong></p>
<p>Do you ever have the urge to do things better than the last time? We do. And it&#39;s this urge that drives us every day. Our environment of discovery and innovation means we&#39;re able to create deep and valuable relationships with our clients to create real change for them and their industries. It&#39;s what got us here – and it&#39;s what will make our future. At Quantexa, you&#39;ll experience autonomy and support in equal measures allowing you to form a career that matches your ambitions. 41% of our colleagues come from an ethnic or religious minority background. We speak over 20 languages across our 47 nationalities, creating a sense of belonging for all.</p>
<p><strong>We&#39;re heading in one direction, the future. We&#39;d love you to join us.</strong></p>
<p>At Quantexa we believe that people and organisations make better decisions when those decisions are put in context – we call this Contextual Decision Intelligence. Contextual Decision Intelligence is the new approach to data analysis that shows the relationships between people, places and organisations - all in one place - so you gain the context you need to make more accurate decisions, faster.</p>
<p><strong>What will you be doing?</strong></p>
<p>You&#39;ll be joining one of our DevOps teams in our R&amp;D department working on the Quantexa Cloud Platform and accompanying solutions. The platform comprises a landscape of low-maintenance, on-demand, and highly secure environments. Our environments host our software for our customers and partners to use; they also support a variety of internal use cases, including underpinning the work of our R&amp;D teams to develop Quantexa Platform software.</p>
<p>You&#39;ll be heavily involved with our cloud-based technical infrastructure, with responsibilities surrounding improving the availability and resilience of our platform, improving its usability and security, ensuring we stay at the forefront of technical innovation, and reducing toil across our estate.</p>
<p>You will also work alongside our software engineering teams to leverage DevOps techniques to support our software release activities and work on unique cloud-based product offerings for our customers to use in their own DevOps processes on their own Cloud estate.</p>
<p><strong>Our tech stack</strong></p>
<ul>
<li>A strong focus on Kubernetes &amp; GitOps, utilising tools like ArgoCD and Istio</li>
<li>Infrastructure Management - Configuration as Code and Infrastructure as Code (Terraform, Docker, Ansible, Packer)</li>
<li>Hybrid public Cloud, primarily GCP &amp; Azure, but also some AWS</li>
<li>DevOps tooling/automation with the best tool for the job, commonly Bash, Python, Groovy, Golang</li>
<li>Provisioning stack includes Elasticsearch, Spark, PostgreSQL, Valkey, Airflow, Kafka, etcd</li>
<li>Log and metric aggregation with Fluentd, Prometheus, Grafana, Alertmanager (a small metrics sketch follows this list)</li>
</ul>
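<p>As one concrete example from the observability side of the stack above, a minimal sketch exposing a custom metric with the prometheus_client library; the metric name and port are illustrative.</p>
<pre><code>import time

from prometheus_client import Counter, start_http_server

# a custom counter Prometheus can scrape from :8000/metrics
DEPLOYS = Counter("platform_deploys_total", "Completed environment deployments")

if __name__ == "__main__":
    start_http_server(8000)
    while True:
        time.sleep(5)
        DEPLOYS.inc()    # stand-in for a real deployment event
</code></pre>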
<p><strong>Requirements</strong></p>
<p><strong>We are looking for candidates who:</strong></p>
<ul>
<li>Take pride in designing, building, and delivering high-quality, well-engineered solutions to complex problems</li>
<li>Take a big-picture approach to solving problems, taking care to ensure that the solution works well within the wider system</li>
<li>Have commercial or non-commercial experience with programming, scripting, or automation</li>
<li>Have a good appreciation for information security principles</li>
</ul>
<p><strong>Experience in the following would be beneficial:</strong></p>
<ul>
<li>Experience with infrastructure management and general Linux administration</li>
<li>Experience with software build and release engineering</li>
<li>Exposure to a handful of the key parts of our tech stack listed above</li>
</ul>
<p><strong>Benefits</strong></p>
<p><strong>Why join Quantexa?</strong></p>
<p>Our perks and quirks.</p>
<p>What makes you Q will help you to realize your full potential, flourish and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>
<p>We offer:</p>
<ul>
<li>Competitive salary and Company Bonus</li>
<li>Flexible working hours in a hybrid workplace &amp; free access to global WeWork locations &amp; events</li>
<li>Pension Scheme with a company contribution of 6% (if you contribute 3%)</li>
<li>25 days annual leave (with the option to buy up to 5 days) + birthday off!</li>
<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>
<li>Family: Enhanced Maternity, Paternity, Adoption, or Shared Parental Leave</li>
<li>Private Healthcare with AXA</li>
<li>EAP, Well-being Days, Gym Discounts</li>
<li>Free Calm App Subscription #1 app for meditation, relaxation and sleep</li>
<li>Workplace Nursery Scheme</li>
<li>Team&#39;s Social Budget &amp; Company-wide Summer &amp; Winter Parties</li>
<li>Tech &amp; Cycle-to-Work Schemes</li>
<li>Volunteer Day off</li>
<li>Dog-friendly Offices</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, GitOps, ArgoCD, Istio, Infrastructure Management, CasC, IasC, Terraform, Docker, Ansible, Packer, Hybrid public Cloud, GCP, Azure, AWS, DevOps tooling/automation, Bash, Python, Groovy, Golang, Elasticsearch, Spark, PostgreSQL, Valkey, Airflow, Kafka, etcd, Fluentd, Prometheus, Grafana, Alertmanager</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Quantexa is a software company providing Contextual Decision Intelligence, helping organisations make better decisions by showing the relationships between people, places and organisations.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/imLeMwxTKuwvDpxHC2mvRB/hybrid-associate-devops-engineer-in-london-at-quantexa</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2f98eac1-9e4</externalid>
      <Title>Backend Kotlin Developer (Senior)</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>At Kody, we&#39;re redefining how businesses take payments and access financial services. As a fast-growing Fintech, we partner with some of the most recognised names in hospitality, F&amp;B, and retail across Hong Kong, helping them modernise payments, settlement, and digital financial experiences.</p>
<p>Our Hong Kong tech team runs like a startup within a scaling company: agile, innovative, and deeply product-driven. We&#39;re now looking for a Senior Backend Engineer to help us build the next generation of payment and settlement infrastructure.</p>
<p><strong>Why Join Us?</strong></p>
<ul>
<li>Work with industry-leading brands in Hong Kong and across the region</li>
<li>Join a fast-moving, innovative tech culture with the stability of a growing international company</li>
<li>Build with modern tools and stacks: Kotlin, Kafka, Kubernetes, Redis, PostgreSQL, and cloud-native frameworks</li>
<li>Enjoy flexibility while working with a passionate, highly skilled engineering team</li>
<li>Access learning and development and the opportunity to travel for collaboration with our global tech teams</li>
</ul>
<p><strong>Your Role</strong></p>
<p>As a Senior Backend Engineer, you&#39;ll be at the heart of our platform, designing and scaling systems that power Kody&#39;s products, from payment gateways and settlement systems to mobile POS integrations and financial service APIs.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain backend services and APIs (REST/gRPC)</li>
<li>Contribute to backend architecture and ensure system reliability and scalability</li>
<li>Debug, troubleshoot, and optimise performance in production systems</li>
<li>Collaborate with product, QA, and frontend teams to deliver robust, maintainable solutions</li>
<li>Write clean, efficient, and well-tested code</li>
<li>Stay current with modern backend and cloud technologies</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience in backend software development (Kotlin or Java)</li>
<li>Strong understanding of microservices, cloud deployment (AWS/GCP), and CI/CD pipelines (GitHub Actions)</li>
<li>Proficiency in SQL/NoSQL databases (PostgreSQL, MySQL, Redis)</li>
<li>Familiar with Docker, Kubernetes, and event-driven systems (Kafka)</li>
<li>Solid understanding of APIs (REST, gRPC) and version control (GitHub)</li>
<li>Familiarity with agile development processes</li>
<li>Fluent in English (Mandarin or Cantonese a plus)</li>
<li>Fintech, payment, or settlement background is a plus</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive compensation and meaningful work impact</li>
<li>Equity available</li>
<li>Flexibility and a global, tech-driven culture</li>
<li>Career growth opportunities within a fast-scaling fintech</li>
<li>Work with ambitious teammates who think big and execute fast</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kotlin, Kafka, Kubernetes, Redis, PostgreSQL, cloud-native frameworks, microservices, cloud deployment, CI/CD pipelines, SQL/NoSQL databases, Docker, event-driven systems</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Kody</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Kody is a fast-growing Fintech that partners with hospitality, F&amp;B, and retail businesses in Hong Kong to modernise payments, settlement, and digital financial experiences.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qAZLgjp7A7TbnzrkimKdmy/backend-kotlin-developer-(senior)-in-admiralty-at-kody</Applyto>
      <Location>Admiralty, Kowloon, Hong Kong</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ee2fcbdc-fc4</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:
<ul>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
</ul>
</li>
<li>Hands-on experience with cloud data platforms, e.g.:
<ul>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
</ul>
</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:
<ul>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
</ul>
</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on Experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes thanks to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on its list of Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, Azure, AWS or GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker / Kubernetes, Advanced analytics, AI / ML or GenAI, Streaming platforms (e.g. Kafka, Azure Event Hubs), Data governance or metadata tools, Cloud, data, or architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>56dc9a51-e66</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:
<ul>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
</ul>
</li>
<li>Hands-on experience with cloud data platforms, e.g.:
<ul>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
</ul>
</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:
<ul>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
</ul>
</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on Experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>You will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role, you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes, thanks to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023, and Management Consulting Magazine named us one of its Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. It is a mid-size player with a supportive, entrepreneurial spirit.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>684c9a64-a14</externalid>
      <Title>Software Engineer, Associate</Title>
      <Description><![CDATA[<p>Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and tackling some of the world&#39;s most interesting challenges? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems.</p>
<p>We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. Our technology empowers millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being.</p>
<p>Our ETF development team is part of the Aladdin Engineering group. We manage a software platform that oversees the global iShares investment process. Together, we develop cutting-edge technology that transforms the interaction between information, people, and technology for global investment firms.</p>
<p>As a member of Aladdin Engineering, you will be working in a fast-paced and highly complex environment, collaborating with cross-functional teams in a multi-office, multi-country environment to define, design, and ship high-quality software solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, deliver, and maintain applications with a focus on high-efficiency, high-availability, concurrent, and fault-tolerant software</li>
<li>Demonstrate technical leadership of software design &amp; architecture to support strategic product roadmap</li>
<li>Collaborate with project managers, technical leads, and business analysts to contribute throughout the SDLC cycle</li>
<li>Manage stakeholders to drive business decisions, negotiate priorities, and partner with various business teams to drive strategy and technology adoption</li>
<li>Ensure scale, resilience, and stability through risk identification and mitigation, quality code reviews, robust test suites, and level-two support</li>
</ul>
<p>Skills &amp; Experience:</p>
<ul>
<li>Hands-on programming experience in Java and/or Python with OO skills and design patterns</li>
<li>Exposure to building microservices and APIs ideally with Kafka or gRPC</li>
<li>Experience working with relational and NoSQL databases (such as SQL Server, Apache Cassandra)</li>
<li>Experience with DevOps, continuous integration, and continuous deployment (CI/CD) pipelines, and tools like Azure DevOps</li>
<li>Strong problem-solving, analytical, and software architecture skills</li>
<li>Experience in partnering with other teams, sponsors, and user groups who are on the same product journey</li>
<li>Ability to work in Agile/Scrum development environments with strong teamwork, communication, and time management skills</li>
<li>Innovative thinking and thought leadership around new and cutting-edge technologies</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Exposure to and innovative thinking around AI workflows and code generation</li>
<li>Experience with financial applications</li>
<li>Experience with cloud native tools (such as Kubernetes, Docker) and cloud platforms (such as Azure, AWS, or GCP)</li>
<li>Exposure to high scale distributed technologies such as Kafka, Ignite, Redis</li>
<li>Experience or real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>B.S. or M.S. degree in Computer Science, Engineering, or a related subject area</li>
<li>3+ years of hands-on development exposure</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>About BlackRock</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, OO skills, design patterns, microservices, APIs, Kafka, gRPC, relational databases, NoSQL databases, DevOps, continuous integration, continuous deployment, CI/CD pipelines, Azure DevOps, problem-solving, analytical skills, software architecture, Agile/Scrum development environments, teamwork, communication, time management, AI workflows, code generation, financial applications, cloud native tools, cloud platforms, high scale distributed technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that invests and protects over $14 trillion of assets.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/8PKqQ6FiWNCs2s8YbwAy9C/software-engineer%2C-associate-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>d7e78450-112</externalid>
      <Title>Full Stack Java Developer, Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>BlackRock&#39;s Securities Lending business generates both incremental alpha for clients and significant firm revenue—about $3B in gross revenue, including $700M in direct revenue. The business is run by BlackRock Global Markets and Index Investments (BGM), with operations supported by GIO, and is used by 150+ global users through the SecLending Technology Platform. With 90% of trades executed automatically and ~140 orders per second, the platform operates as a high-availability, low-latency system.</p>
<p>Securities Lending (SecLending) involves temporarily transferring securities to a borrower for a fee while the lender retains ownership, with the borrower returning the securities on demand or at the end of the agreement.</p>
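<p>As a purely illustrative aside, the mechanic above is easy to sketch in code. The following minimal Java example is an assumption-laden sketch, not a description of BlackRock&#39;s actual systems: the class name, the 25 bps fee rate, and the ACT/360 day-count convention are all hypothetical choices made for the example.</p>
<pre><code>import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch of a securities loan: the lender retains ownership
// and accrues a fee while the securities are out on loan.
public class SecLendingLoan {
    private final double marketValue;   // value of the securities on loan
    private final double annualFeeRate; // lending fee, e.g. 0.0025 = 25 bps (assumed)
    private final LocalDate openDate;   // date the loan was opened

    public SecLendingLoan(double marketValue, double annualFeeRate, LocalDate openDate) {
        this.marketValue = marketValue;
        this.annualFeeRate = annualFeeRate;
        this.openDate = openDate;
    }

    // Fee accrued to the lender so far, using an assumed ACT/360 convention.
    public double accruedFee(LocalDate asOf) {
        long days = ChronoUnit.DAYS.between(openDate, asOf);
        return marketValue * annualFeeRate * days / 360.0;
    }

    public static void main(String[] args) {
        // $1M of stock lent at 25 bps accrues about $208 of fee over 30 days.
        SecLendingLoan loan = new SecLendingLoan(1_000_000, 0.0025, LocalDate.of(2026, 1, 1));
        System.out.printf("Accrued fee: %.2f%n", loan.accruedFee(LocalDate.of(2026, 1, 31)));
    }
}
</code></pre>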
<p>About Aladdin</p>
<p>Aladdin is an operating system for investment managers that connects the information, people, and technology needed to manage portfolios in real time. The platform combines sophisticated risk analytics with comprehensive portfolio management, trading, compliance, and operations tools to power informed decision-making, effective risk management, and scalable operational workflows. Aladdin is delivered to a global community of over 1,000 clients, with more than 100,000 end users worldwide.</p>
<p>Job Overview</p>
<p>For Securities Lending to remain market-leading, we are not only transforming the capabilities we offer to our clients but also investing in the re-architecture and modernization of the underlying platform and infrastructure. We must evolve Aladdin’s SecLending technology to deliver a resilient, AI-enabled, API-first platform that enables business growth, scale and automation, operational efficiency, and data-driven decision making, while positioning SecLending technology as a commercial-grade product.</p>
<p>As an Engineer in the Inventory Management Squad, you will design, build, and maintain all Aladdin workflows related to managing the universe of lendable inventory and broadcasting available inventory to platforms and borrowers. This includes, but is not limited to, reference data, analytics, signals, grouping &amp; strategies, trading overlays, and publishing inventory and targeted availability. The complexity of the Securities Lending business may require “cross-squad” partnership.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and deploy Securities Lending apps end to end, owning frontend, backend APIs, and data/AI features.</li>
<li>Write reusable, maintainable, and extensible code, and create documentation for team members.</li>
<li>Collaborate with cross-functional, globally distributed teams to deliver high-performing and reliable software solutions aligned with business goals.</li>
<li>Champion a culture of quality within the organization, driving awareness and consistency with Quality standards.</li>
<li>Guide and mentor junior team members from both technical and functional standpoint. Foster a culture of continuous improvement and accountability within the team.</li>
<li>Break down complex technical problems into clear, actionable tasks and make informed architectural decisions that balance long-term strategy with short-term needs.</li>
<li>Proactively identify project risks, create mitigation plans, and escalate issues when necessary to maintain project timelines and quality.</li>
<li>Oversee production readiness, early life support, and post-release stability including root cause analysis and remediation strategies.</li>
<li>Stay current with emerging technologies (including Kafka and other streaming platforms), assess their potential impact, and guide their adoption within product and platform roadmaps.</li>
<li>Promote continuous learning and facilitate upskilling initiatives to match evolving tech landscapes.</li>
<li>Operate independently with minimal supervision while providing strategic guidance to junior engineers and stakeholders alike.</li>
<li>Bring some experience in, or a real interest in, finance and investment processes, and an ability to translate business problems into technical solutions. Exposure to Securities Lending is a bonus but not a mandatory skill.</li>
<li>Bring experience leading development teams or projects, or owning the design and technical quality of a significant application, system, or component, along with the ability to form positive relationships with partnering teams, sponsors, and user groups.</li>
</ul>
<p>Nice to have and opportunities to learn</p>
<ul>
<li>Exposure to building microservices and APIs, ideally with REST, Kafka, or gRPC.</li>
<li>Experience working in an agile development team or on open-source development projects.</li>
<li>Experience with optimization, algorithms, or related quantitative processes.</li>
<li>Exposure to high-scale distributed technologies such as Kafka, MongoDB, Ignite, and Redis.</li>
<li>Experience with cloud platforms such as Microsoft Azure, AWS, and Google Cloud.</li>
<li>Experience with DevOps and tools like Azure DevOps.</li>
<li>Experience with AI-related projects/products, or experience working in an AI research environment.</li>
<li>A degree, certifications, or an open-source track record that demonstrates mastery of software engineering principles.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring, TypeScript, JavaScript, Microservices, Angular, Open-Source technology stack, Relational database, NoSQL Database, Caching technologies, Distributed systems, Agile methodology, Scrum, TDD, BDD, Kafka, gRPC, Cloud platforms, DevOps, AI-related projects/products, AI research environment</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment products and services to institutional and individual investors. It has over $8 trillion in assets under management.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/pq29LM9g93gEuCMd3Ncs7F/full-stack-java-developer%2C-associate-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ec8eeead-726</externalid>
      <Title>Java Engineer, Aladdin Engineering, Associate</Title>
      <Description><![CDATA[<p><strong>About this role</strong></p>
<p>At BlackRock, technology is the foundation of our business. As a Java Back-End Engineer, you&#39;ll lead by example — architecting, coding, and mentoring teams to build resilient systems that power our global post-trade operations. You&#39;ll design and deliver enterprise-scale software with a focus on reliability, performance, and clean engineering practices.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design and develop robust, high-performance back-end systems using Java 11+ and the Spring Boot ecosystem.</li>
<li>Lead design discussions, code reviews, and architecture sessions with a hands-on approach.</li>
<li>Build and maintain microservices and event-driven systems to process and distribute large-scale financial data.</li>
<li>Develop data integration and pipeline components that connect systems across Snowflake, SQL Server, and real-time streaming platforms.</li>
<li>Implement and optimize Redis-based caching and data stores for low-latency access patterns.</li>
<li>Champion best practices for code quality, testing, automation, and performance tuning.</li>
<li>Collaborate cross-functionally to ensure technical solutions align with product goals and business outcomes.</li>
</ul>
<p><strong>Qualifications / Competencies</strong></p>
<ul>
<li>B.S./M.S. in Computer Science, Engineering, or related discipline.</li>
<li>3+ years of professional experience in Java and object-oriented design.</li>
<li>Strong knowledge of Spring Boot, REST APIs, and enterprise integration patterns.</li>
<li>Deep expertise in SQL Server, including stored procedures, performance tuning, and data modeling.</li>
<li>Experience with Redis for caching or data persistence.</li>
<li>Hands-on exposure to Kafka or similar publish-subscribe systems for real-time event processing.</li>
<li>Familiarity with Snowflake and data pipeline concepts (ETL, batch vs. streaming).</li>
<li>Experience with Agile development and a general understanding of how LLMs work.</li>
<li>Strong focus on clean architecture, maintainability, and production readiness.</li>
</ul>
<p><strong>Our benefits</strong></p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p><strong>Our hybrid work model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, SQL Server, Redis, Kafka, Snowflake, Agile coding, Kubernetes, Docker, cloud-native environments, observability tools, scripting experience in Python</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/3vLTpfkn1mYzZFEn6qtubs/java-engineer%2C-aladdin-engineering%2C-associate-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>