<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>b12d10a4-9a6</externalid>
      <Title>Full Stack Software Engineer, Applied AI</Title>
      <Description><![CDATA[<p>About Wonderschool</p>
<p>Wonderschool is a software company that helps small business owners earn more money. They started with childcare providers because it is a large, underserved market where technology can have an outsized impact on revenue. But the model scales: they are building a platform and a playbook that will expand into other verticals.</p>
<p>The Vision</p>
<p>The North Star for this role is a system where a user signal automatically triggers a chain of AI agents that writes the product requirements, designs the solution, builds the code, reviews it, and ships it. No human in the loop for the routine stuff. Humans define what matters, train the agents, and review edge cases.</p>
<p>What You Will Do</p>
<p>Own the AI Development Loop</p>
<p>Review and iterate on AI-generated code and feed that review back as training signal to make the agents better over time Architect the codebase for AI legibility: clean data models, strong documentation, deprecation of legacy patterns that cause agents to fail Build and maintain the automated product pipeline: signal detection, agent-generated requirements, AI-driven development, AI code review, commit When agents break in production, you own the diagnosis and fix</p>
<p>Enable the Business, Not Just the Product</p>
<p>Build tools and workflow infrastructure that let operations, sales, and customer success teams operate the platform themselves, without filing engineering tickets Translate what internal teams need into automated, reliable systems Help non-engineers understand what is possible and then make it happen</p>
<p>Ship Full-Stack Product</p>
<p>Build features across the frontend and backend: React/TypeScript on the front, Node.js or Elixir on the back, Postgres underneath Own the full lifecycle from requirements to production Debug issues across the entire stack</p>
<p>Stay Close to What Is Live</p>
<p>Actively observe the production environment: usage patterns, agent failures, edge cases, provider behavior This role requires a high degree of personal accountability for system health. When things are running well, that is because you have built the monitoring and alerting to catch problems early. Getting there takes commitment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$140,000+</Salaryrange>
      <Skills>AI coding agents, Claude Code, OpenClaw, Codex, Cursor, React/TypeScript, Node.js, Elixir, Postgres, Experience building or operating agentic workflows in production, Experience designing feedback loops that improve AI output quality over time, Background at a startup or high-growth company, Experience with CI/CD, DevOps, or cloud infrastructure, Experience with REST or GraphQL API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Wonderschool</Employername>
      <Employerlogo>https://logos.yubhub.co/wonderschool.com.png</Employerlogo>
      <Employerdescription>Wonderschool builds software that helps small business owners earn more money, initially focusing on childcare providers but expanding into other verticals.</Employerdescription>
      <Employerwebsite>https://www.wonderschool.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/wonderschool/jobs/7712631003</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>37853b28-736</externalid>
      <Title>Senior Platform Infrastructure Engineer</Title>
      <Description><![CDATA[<p>As a Senior Platform Infrastructure Engineer, you&#39;ll own the systems that keep Spade&#39;s core platform fast, reliable, and scalable. You&#39;ll work across the full infrastructure stack , from the cloud services and data pipelines that power our enrichment APIs, to the developer tooling and observability systems that keep our engineering team moving quickly.</p>
<p>This is a high-ownership role on a small team, where your decisions directly shape the reliability of products processing hundreds of millions of transactions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating on our technical vision to build and scale internal infrastructure that engineers and customers love</li>
<li>Developing and maintaining low-latency, massively-scalable infrastructure for data ingestion, reporting, and developer-facing APIs</li>
<li>Continuously improving our engineering best practices by designing and implementing systems that allow us to expand our platform while maintaining high quality and solid observability</li>
<li>Debugging production issues across all levels of the stack</li>
<li>Bringing a strong collaborative approach to deliver value to customers and internal stakeholders at Spade</li>
</ul>
<p>Must-haves include:</p>
<ul>
<li>5+ years of experience building and scaling AWS Cloud Infrastructure (ECS/Fargate, RDS Aurora Serverless, Terraform/CDK etc)</li>
<li>Experience with highly scalable networked APIs, Postgres, data pipelines, and async processing</li>
<li>Experience and opinions with implementing effective observability instrumentation and ensuring alerts are timely and actionable</li>
<li>Experience building tools and systems that empower internal developers to deliver business value rapidly</li>
<li>Experience building systems from scratch, making trade-offs, and executing autonomously in early-stage environments</li>
<li>A product mindset and strong problem-solving skills, with the ability to navigate ambiguity and focus on delivering customer value, for internal developers and the customers they build for</li>
<li>A collaborative mindset, fostering a culture of mentorship, shared success, and continuous improvement</li>
</ul>
<p>Nice-to-haves include:</p>
<ul>
<li>Experience with building and supporting infrastructure in multiple AWS regions</li>
<li>Experience with Python and Django</li>
<li>Experience with Open Telemetry</li>
<li>Experience with Pganalyze</li>
<li>Experience with transaction, merchant, and/or location data</li>
<li>Experience and/or interest in fintech and/or data products</li>
<li>Experience with agentic development infrastructure and workflows</li>
<li>Familiarity with data science and analytics tools such as Pyspark, Databricks, and Delta Lake, and Hex</li>
</ul>
<p>Why join Spade?</p>
<ul>
<li>Be a cultural founder. As an early employee, you’ll play a meaningful role in defining and building our culture.</li>
<li>Get in on the ground floor. We’re a small but well-funded team – joining now comes with limited risk and unlimited upside.</li>
<li>Build the next generation of financial infrastructure. Our products will power the next wave of innovation in fintech, helping our customers deliver better, more transparent products and services to the consumer.</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Competitive compensation and equity package</li>
<li>Full medical, dental, and vision benefits for US-based employees</li>
<li>Life &amp; short-term disability insurance</li>
<li>Unlimited PTO</li>
<li>Early exercise program</li>
<li>Extended post-termination exercise period</li>
<li>401K for retirement planning</li>
<li>Hybrid team, with pet-friendly headquarters in NYC</li>
<li>Paid parental leave</li>
<li>Work from home stipend</li>
</ul>
<p>Diversity &amp; Inclusion at Spade:</p>
<p>Spade is an equal opportunity employer, committed to building a culture that is diverse, equitable, and inclusive. We believe that having people with different backgrounds, experiences, abilities, and perspectives not only helps us build the best products for our customers, but also helps us be the best version of ourselves.</p>
<p>Salary Range:</p>
<p>At Spade, we view total compensation as consisting of salary + equity + benefits. We recruit motivated and high-performing talent, and work to compensate people in line with the value they bring to our team.</p>
<p>We aim to pay fairly and competitively, and consider a number of factors in developing compensation offers. These factors include years and breadth of experience, interview performance, market dynamics, and internal equity.</p>
<p>The anticipated base salary range for this role is between $200,000 and $220,000, and an equity grant.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Between $200,000 and $220,000, with an equity grant</Salaryrange>
      <Skills>AWS Cloud Infrastructure, ECS/Fargate, RDS Aurora Serverless, Terraform/CDK, Highly scalable networked APIs, Postgres, Data pipelines, Async processing, Observability instrumentation, Open Telemetry, Pganalyze, Transaction, merchant, and/or location data, Fintech and/or data products, Agentic development infrastructure and workflows, Data science and analytics tools such as Pyspark, Databricks, and Delta Lake, and Hex</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Spade</Employername>
      <Employerlogo>https://logos.yubhub.co/spade.com.png</Employerlogo>
      <Employerdescription>Spade is a data and AI platform that turns messy transaction strings into structured, verified records. It is a fast-growing, Series B company backed by industry experts and top-tier investors.</Employerdescription>
      <Employerwebsite>https://spade.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/spade/jobs/4686049005</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3709ceac-a62</externalid>
      <Title>Developer Educator</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Developer Educator who is excited about bringing database education to new heights, creating unique and engaging content, helping customers, and building the PlanetScale engineering community at conferences, events, and online.</p>
<p>As a Developer Educator, you will create technical content to help customers and engineers at-large learn how Postgres, MySQL, Vitess, Neki, and the PlanetScale ecosystem works. You will also speak at conferences and events, representing PlanetScale in the database and engineering community. Additionally, you will create and refine product documentation, host livestreams and webinars to engage with and educate our community, and travel to connect with developers at events worldwide.</p>
<p>The ideal candidate is someone who teaches complex technical concepts in accessible ways, loves learning about and explaining full-stack application development, databases, and compute infrastructure, and is a natural communicator who can adapt their message for different audiences. They should thrive in community settings and enjoy building relationships with developers, have excellent writing skills, and be self-motivated and able to manage multiple projects independently.</p>
<p>What you will need includes 2+ years of experience in engineering, solution architecture, technical education, or developer relations, a strong understanding of relational databases (Postgres or MySQL), proven ability to create technical content, experience speaking at conferences or hosting technical presentations and events, and a willingness to travel between 20-50% of the time.</p>
<p>Experience with Vitess or other distributed database technologies, knowledge of distributed systems and database scaling challenges, and previous experience at a database or infrastructure company are also desirable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$130,000 - $200,000 USD</Salaryrange>
      <Skills>Postgres, MySQL, Vitess, Neki, database, full-stack application development, compute infrastructure, distributed database technologies, distributed systems, database scaling challenges</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that offers a database platform for developers to efficiently handle data and workloads of all scales.</Employerdescription>
      <Employerwebsite>https://planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4021848009</Applyto>
      <Location>San Francisco and/or US Remote</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f2c18f58-62e</externalid>
      <Title>Senior/Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for full-stack product engineers to own big parts of our fast-growing marketplace, including our marketplace storefront, next-gen course platform, and recommendation and data systems.</p>
<p>As a senior staff software engineer, you will have a high degree of ownership over our technical systems and product roadmap. You will work with a world-class team, including engineers from Google, Twitter, Venmo, and Udemy, to build a fast-growing marketplace of highly-rated courses, including many million-dollar instructors.</p>
<p>Responsibilities:</p>
<ul>
<li>Own big parts of our fast-growing marketplace, including our marketplace storefront, next-gen course platform, and recommendation and data systems.</li>
<li>Work with a world-class team to build a fast-growing marketplace of highly-rated courses, including many million-dollar instructors.</li>
<li>Collaborate with cross-functional teams to drive product development and growth.</li>
<li>Design, develop, and deploy scalable and efficient software solutions.</li>
<li>Mentor junior engineers and contribute to the growth and development of the team.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience shipping real features to a real user base (excluding personal or school projects).</li>
<li>Experience with React, Next.js, Python, FastAPI, Postgres, and OpenSearch.</li>
<li>Strong understanding of software design patterns and principles.</li>
<li>Excellent communication and collaboration skills.</li>
<li>Ability to work independently and as part of a team.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work with a world-class team and build a fast-growing marketplace of highly-rated courses.</li>
<li>Flexible remote work arrangement.</li>
<li>Professional development opportunities, including training and mentorship.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, Next.js, Python, FastAPI, Postgres, OpenSearch, Cloud computing, Containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Maven</Employername>
      <Employerlogo>https://logos.yubhub.co/maven.com.png</Employerlogo>
      <Employerdescription>Maven is a platform for human expertise on the Internet, offering practical, professional courses on key skills taught by leading experts.</Employerdescription>
      <Employerwebsite>https://maven.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/maven/jobs/4023548004</Applyto>
      <Location>Remote (US time zones)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b36484fe-d79</externalid>
      <Title>Senior Full Stack Engineer - Enterprise Systems</Title>
      <Description><![CDATA[<p>We are seeking a highly capable Senior Full-Stack Software Engineer to build the software systems that enable automated testing, manufacturing, and real-time control of our flight hardware and spacecraft.</p>
<p>You will work across disciplines,hardware, embedded, operations, production, and software,to design full-stack applications that automate and validate critical flight components and support test, integration, and ground control.</p>
<p>This is a high-impact, multidisciplinary role. You&#39;ll own the development of internal tools used to test, operate, and monitor spacecraft,ranging from low-level hardware interfaces to web-based control panels for telemetry and command operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software systems that support automated testing of hardware components and spacecraft systems.</li>
<li>Own UI and backend development for internal tools used in test infrastructure, manufacturing and spacecraft operations.</li>
<li>Develop component- and system-level automated production infrastructure using Python and full-stack tools.</li>
<li>Work closely with satellite operators and manufacturing technicians to build UI workflows that streamline mission operations, command execution, and telemetry visualization.</li>
<li>Drive cross-functional collaboration on software architecture, technical design, and hardware/software test strategy.</li>
<li>Contribute to recruiting and interviewing efforts as we scale the team.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of professional experience as a full-stack software engineer (excluding internships).</li>
<li>Strong Proficiency with React JavaScript/TypeScript and Python.</li>
<li>Experience with FastAPI or similar frameworks.</li>
<li>Experience working with relational/SQL databases (e.g. Postgres).</li>
<li>Experience building and maintaining REST APIs and backend services.</li>
<li>Comfortable working in Linux, using shell tools, and managing source control with Git.</li>
<li>Experience with SQL/relational databases (e.g., Postgres).</li>
<li>Comfortable rapidly prototyping and fully developing user interface concepts with focus on overall user experience.</li>
<li>Comfortable diving into all layers to implement both front-end and back-end components of new features.</li>
<li>Experience deploying backend services to cloud environments (e.g., AWS, GCP) and on-prem.</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Experience developing tools for hardware or embedded systems.</li>
<li>Experience with Docker, Kubernetes, or containerized deployments.</li>
<li>Familiarity with design systems and rapid UI prototyping for internal tooling.</li>
<li>Background in satellite operations, aerospace, or real-time telemetry systems.</li>
<li>Experience with test instrumentation, schematics, and debugging interfaces between software and hardware.</li>
<li>Strong understanding of automated test strategies and validation workflows for embedded or hardware-integrated systems.</li>
<li>Experience building and deploying sw for a manufacturing environment.</li>
<li>Experience reading datasheets, debugging analog/digital interfaces, and working with tools like oscilloscopes or logic analyzers.</li>
</ul>
<p>What we offer: All our positions offer a compensation package that includes equity and robust benefits. Base pay is just one component of Astranis’s total rewards package. Your compensation also includes a significant equity package via incentive stock options, high-quality company-subsidized healthcare, disability and life insurance, 401(k) retirement planning, flexible PTO, and free on-site catered meals.</p>
<p>Astranis pay ranges are informed and defined through professional-grade salary surveys and compensation data sources. The actual base salary offered to a successful candidate will additionally be influenced by a variety of factors including experience, credentials &amp; certifications, educational attainment, skill level requirements, and the level and scope of the position.</p>
<p>Base Salary $145,000-$210,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Base Salary $145,000-$210,000 USD</Salaryrange>
      <Skills>React, JavaScript, TypeScript, Python, FastAPI, Postgres, REST APIs, Backend services, Linux, Git, SQL, Relational databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4613477006</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2b8ee578-505</externalid>
      <Title>Software Engineer Intern - Data Platform (Summer 2026)</Title>
      <Description><![CDATA[<p>As a Software Engineer Intern, you&#39;ll build and extend the systems that ingest, store, and query the massive streams of telemetry flowing from our satellite fleet in real time.</p>
<p>From designing data pipelines that handle thousands of channels per spacecraft to building monitoring rules that catch anomalies before they become problems, your work will give mission operators the visibility they need to keep satellites healthy and online.</p>
<p>You&#39;ll be working at the intersection of data engineering and space operations, building tools that turn raw satellite signals into actionable insight.</p>
<p>This role will contribute to both our commercial and US government programs.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build monitoring rules and anomaly detection capabilities for our telemetry data platform</li>
</ul>
<ul>
<li>Contribute across the data platform by building out client libraries, data pipelines, query interfaces, and APIs</li>
</ul>
<ul>
<li>Collaborate with mission operations and engineering teams to define monitoring requirements and data needs</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Currently pursuing a B.S. or M.S. in Computer Science, or equivalent degree</li>
</ul>
<ul>
<li>Strong proficiency in Python</li>
</ul>
<ul>
<li>Experience and understanding of databases (Postgres, time-series databases, etc)</li>
</ul>
<ul>
<li>Experience and understanding of API design (REST, gRPC, Protocol Buffers)</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Experience with time-series data or telemetry systems</li>
</ul>
<ul>
<li>Experience with async Python programming</li>
</ul>
<ul>
<li>Experience with Kubernetes</li>
</ul>
<p>The base pay for this position is $29.00 per hour.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$29.00 per hour</Salaryrange>
      <Skills>Python, Postgres, time-series databases, API design, REST, gRPC, Protocol Buffers, time-series data, telemetry systems, async Python programming, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has a team of 450 engineers and entrepreneurs and has raised over $750 million from some of the world&apos;s best investors.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4667477006</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dce082b4-669</externalid>
      <Title>Senior Software Engineer - Full Stack (Network Software)</Title>
      <Description><![CDATA[<p>As a Senior Full Stack Engineer focused on Network Software, you will be responsible for designing and implementing the software that will enable us to design, build and manage Satellite and ground Networks.</p>
<p>The tools you build will be used by our customers to view the performance of satellites and internally to plan, design and optimize our satellite networks and its performance. This role will contribute to both commercial and US Government programs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the design, build, and implementation of Satellite and network management software</li>
<li>Own backend and UI development for internal tools across test infrastructure, mission control, and business operations</li>
<li>Build intuitive web-based UIs for telemetry dashboards, test orchestration, command execution, and workflow monitoring</li>
<li>Automate manual workflows to increase operational velocity across engineering, network, and satellite operations</li>
<li>Work closely with the mission operations teams to create software that helps design, build and manage Satellites and Networks</li>
<li>Design high-performance, reliable, mission-critical software that sends commands to space</li>
<li>Collaborate with multidisciplinary teams to define software requirements, architectures, and designs</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related technical field</li>
<li>7+ years of experience as a full stack developer in industry</li>
<li>Experience with system level architecture and big data design</li>
<li>Strong proficiency with React JavaScript/TypeScript and Python</li>
<li>Experience with FastAPI or similar frameworks</li>
<li>Experience working with relational/SQL databases (e.g. Postgres)</li>
<li>Experience designing and maintaining REST or GraphQL APIs</li>
<li>Comfortable rapidly prototyping and fully developing user interface concepts with focus on overall user experience</li>
<li>Comfortable diving into all layers to implement both front-end and back-end components of new features</li>
<li>Experience deploying backend services to cloud environments (e.g., AWS, GCP)</li>
<li>Comfortable working in Linux, using shell tools, and managing source control with Git</li>
<li>Familiarity with Docker, Kubernetes, or other container-based deployment strategies</li>
<li>Highly motivated, self-starting, and able to perform duties autonomously without supervision</li>
</ul>
<p>Bonus Requirements:</p>
<ul>
<li>Experience with developing high availability systems</li>
<li>Experience with C++</li>
<li>Experience in Satellite operations / Network Management Software</li>
<li>Experience with application deployment operations and release/version management (CI/CD) and cluster management</li>
<li>Experience with design systems and rapid UI prototyping for internal tooling</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000-$215,000 USD</Salaryrange>
      <Skills>React JavaScript/TypeScript, Python, FastAPI, Postgres, REST or GraphQL APIs, Linux, Git, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4070210006</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>ba064711-c52</externalid>
      <Title>Staff Software Engineer: Applied AI</Title>
      <Description><![CDATA[<p><strong>About Flexport:</strong></p>
<p>The recent global supply chain crisis has put Flexport center stage as we continue to play a pivotal role in how goods move around the world.</p>
<p><strong>Staff Software Engineer: Applied AI</strong></p>
<p>Every day, thousands of shipments cross borders, change hands, and hit unexpected problems. For decades, fixing those problems meant phone calls, emails, and humans heroically firefighting. We think that&#39;s about to change completely.</p>
<p>We&#39;ve been building AI agents that spot trouble before it happens, reroute shipments, and keep goods moving,with our team of experts in the loop where it counts. The early results have been jaw-dropping. We&#39;re now going all in on a future where supply chains run themselves, and we&#39;re looking for the people who want to build that future with us.</p>
<p>This isn&#39;t a role where you join a team and pick up tickets. You&#39;ll find the highest-leverage problems, design the solutions, and ship them to operators moving freight across 112 countries. If that sounds like your idea of a good time, read on.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>You&#39;ll build the cutting-edge agents and AI-powered applications that make Flexport&#39;s operations smarter, faster, and increasingly autonomous. That means:</p>
<ul>
<li>Designing and shipping end-to-end AI agents that handle real logistics work,customs compliance, document processing, exception management, and more</li>
</ul>
<ul>
<li>Crossing organizational boundaries to understand problems deeply, align stakeholders, and get things done without waiting for permission</li>
</ul>
<ul>
<li>Moving fast from idea to production, treating every deployment as a learning opportunity</li>
</ul>
<ul>
<li>Working directly with operators, domain experts, and leadership to turn ambiguous problems into shipped solutions</li>
</ul>
<p><strong>You Should Have</strong></p>
<ul>
<li>10+ years of software engineering experience; you&#39;ve built with LLMs extensively,whether in production, side projects, or just because you couldn&#39;t stop yourself</li>
</ul>
<ul>
<li>Strong product instincts: you can identify what&#39;s worth building, not just execute on a spec</li>
</ul>
<ul>
<li>LLM fluency: agent patterns, RAG, prompt engineering, tool use, evaluation. You know what works in production and what looks good in demos</li>
</ul>
<ul>
<li>Full-stack capability in TypeScript and Next.js,you can own a feature end to end</li>
</ul>
<ul>
<li>An entrepreneurial drive: you thrive in ambiguity, move fast, and don&#39;t wait to be told what to do</li>
</ul>
<ul>
<li>Excellent communication: you can work across teams and disciplines to get alignment and unblock yourself</li>
</ul>
<ul>
<li>An audacious appetite for impact: you&#39;re not here to maintain,you&#39;re here to change how global trade works</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with workflow orchestration (Temporal, etc.)</li>
</ul>
<ul>
<li>Experience building internal tools or operator-facing applications</li>
</ul>
<p><strong>How We Work</strong></p>
<ul>
<li>We come to the office 3 times a week to hang out, whiteboard, and ship together</li>
</ul>
<ul>
<li>We have the latest hardware and software,including frontier AI models on day one</li>
</ul>
<ul>
<li>We&#39;re agile, but not dogmatic. Teams decide how they work best</li>
</ul>
<ul>
<li>Stack: Next.js, TypeScript, Postgres, Snowflake. For AI: Anthropic, OpenAI, Google AI APIs</li>
</ul>
<p><strong>Why This Role Is Special</strong></p>
<ul>
<li>Your work is visible: This is a small, senior team reporting directly to the VP of Engineering. What you build gets noticed, and your ideas shape the direction</li>
</ul>
<ul>
<li>Real stakes: thousands of containers, 112 countries, $19 billion of goods. Your agents will have immediate, measurable impact</li>
</ul>
<ul>
<li>Full ownership: from problem definition to production deployment, it&#39;s yours</li>
</ul>
<ul>
<li>You&#39;re early: the playbook for applied AI in enterprise logistics doesn&#39;t exist yet,you&#39;ll help write it</li>
</ul>
<p><strong>What&#39;s in it for you:</strong></p>
<ul>
<li>An opportunity to contribute to one of the fastest-growing companies, where you’ll have the chance to create a global impact while being a part of a thriving multinational environment</li>
</ul>
<ul>
<li>Daily catered lunches incl. vegetarian options, breakfast, snacks and soft drinks available in our office on daily basis</li>
</ul>
<ul>
<li>Commute expenses: Flexport will cover home-office commuting costs for employees living outside of Amsterdam</li>
</ul>
<ul>
<li>25 working days as vacation days based on full time employment.</li>
</ul>
<ul>
<li>Health insurance: Flexport offers a collective health insurance plan including a basic package and any available additional packages. Your monthly premium is fully paid by Flexport.</li>
</ul>
<ul>
<li>A defined pension contribution scheme</li>
</ul>
<ul>
<li>Equity program: every team member becomes a shareholder, aligning our success with yours. As a private company in a multi-trillion dollar industry, you have a direct stake in our collective growth and success.</li>
</ul>
<ul>
<li>Employee Assistance Program through Aetna Resources for Living: Flexport provides an employer-sponsored program at no cost to you and your household members</li>
</ul>
<ul>
<li>Parental leave benefit: Flexport is here to support you and your families in one of the most important times in life – the birth of a child! Our parental leave program allows both mothers and partners to take time off from work for pregnancy, childbirth, and to bond with your new child.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLMs, TypeScript, Next.js, Postgres, Snowflake, Anthropic, OpenAI, Google AI APIs, workflow orchestration, internal tools, operator-facing applications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Flexport</Employername>
      <Employerlogo>https://logos.yubhub.co/flexport.com.png</Employerlogo>
      <Employerdescription>Flexport is a logistics company that specializes in international trade and supply chain management. It is a privately held company with a valuation of over $19 billion.</Employerdescription>
      <Employerwebsite>https://www.flexport.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/flexport/jobs/7311883</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6cd50625-36d</externalid>
      <Title>Software Engineer - Full Stack (Network Software)</Title>
      <Description><![CDATA[<p>As a Full Stack Engineer on our Network Software team, you will be responsible for designing and implementing the software that enables us to design, build, and manage Satellite and ground Networks. The tools you build will be used by our customers to view and control the performance of satellites and internally to plan, design, and optimize our satellite networks and its performance.</p>
<p>This role will contribute to both commercial and US Government programs.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the design, build, and implementation of Satellite and network management software</li>
<li>Owning backend and UI development for internal tools across test infrastructure, mission control, and business operations</li>
<li>Building intuitive web-based UIs for telemetry dashboards, test orchestration, command execution, and workflow monitoring</li>
<li>Automating manual workflows to increase operational velocity across engineering, network, and satellite operations</li>
<li>Working closely with the mission operations teams to create software that helps design, build, and manage Satellites and Networks</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related technical field</li>
<li>4+ years of experience as a full stack developer in industry</li>
<li>Strong proficiency with React JavaScript/TypeScript and Python</li>
<li>Experience with FastAPI or similar frameworks</li>
<li>Experience working with relational/SQL databases (e.g. Postgres)</li>
<li>Experience designing and maintaining REST or GraphQL APIs</li>
<li>Comfortable rapidly prototyping and fully developing user interface concepts with focus on overall user experience</li>
<li>Comfortable diving into all layers to implement both front-end and back-end components of new features</li>
<li>Experience deploying backend services to cloud environments (e.g., AWS, GCP)</li>
<li>Comfortable working in Linux, using shell tools, and managing source control with Git</li>
<li>Familiarity with Docker, Kubernetes, or other container-based deployment strategies</li>
<li>Highly motivated, self-starting, and able to perform duties autonomously without supervision</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000-$170,000 USD</Salaryrange>
      <Skills>React, JavaScript, TypeScript, Python, FastAPI, Postgres, REST, GraphQL, Linux, Git, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4360819006</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d3acbeab-0bf</externalid>
      <Title>Software Engineer- Backend Intern (Summer 2026)</Title>
      <Description><![CDATA[<p>As a Software Engineer Intern on the Platform team, you&#39;ll design and build services that autonomously control satellites, monitor telemetry for anomalies, and provide real-time situational awareness to keep our fleet safe and online. You&#39;ll also be building the core components and services that power the rest of our software organisation, enabling every team to move faster and more reliably.</p>
<p>This role will contribute to both our commercial and US government programs. Internships at Astranis typically last for twelve weeks, and are hourly roles designed for students who are currently enrolled at a four-year university. If you have already graduated from a four-year university, please apply to be an Associate Engineer.</p>
<p>Role:</p>
<ul>
<li>Design and build high-performance, reliable, mission-critical software that is used to send commands to space</li>
<li>Take full ownership of features, working across backend and infrastructure</li>
<li>Collaborate with multidisciplinary teams to define software requirements, architectures, and designs</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Currently pursuing a B.S. or M.S. in Computer Science, or equivalent degree</li>
<li>Strong proficiency in Python</li>
<li>Experience and understanding of databases (Postgres, etc)</li>
<li>Experience and understanding of pub/sub and streaming systems (RabbitMQ, Flink, etc)</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Experience with Kubernetes</li>
<li>Experience building fleet management systems</li>
</ul>
<p>Base pay for this position is $29.00 per hour.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Postgres, RabbitMQ, Flink, Kubernetes, fleet management systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4648080006</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>955a1285-ace</externalid>
      <Title>Staff Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Forward Deployed Engineer to join our team in San Francisco. As a Forward Deployed Engineer, you will work directly with enterprise customers to help them deploy, scale, and operationalize their AI workloads on Fal. This is a highly technical, customer-facing role where you&#39;ll act as the bridge between Sales, Product and Infrastructure teams.</p>
<p>You&#39;ll join customer calls, deeply understand their architecture and needs, and translate those into actionable implementation plans and product requirements. You will be responsible for unblocking customer deployments, accelerating onboarding, and ensuring enterprise accounts successfully reach production fast.</p>
<p>This is a role for someone who loves solving real-world engineering problems and wants direct ownership over outcomes that drive revenue and product growth.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Join enterprise onboarding calls and act as the technical owner for deployments</li>
<li>Help customers integrate their models into Fal Serverless (APIs, scaling, observability, deployment workflows)</li>
<li>Debug customer issues end-to-end across frontend, backend, and infra layers</li>
<li>Translate customer feedback into clear product specs, tasks, and engineering priorities</li>
<li>Work closely with Product + Infra to ensure enterprise needs are shipped into the platform</li>
<li>Build custom proofs-of-concept or lightweight integrations to unblock adoption</li>
<li>Identify repeatable patterns across customers and turn them into reusable product features</li>
<li>Improve internal tooling, onboarding flows, and docs based on real customer pain points</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong engineering background (Proficiency with TypeScript, Python, Postgres, and Next.js)</li>
<li>Experience working with customers in a technical capacity (Solutions Engineer, Forward Deployed Engineer, DevRel Engineer, or similar)</li>
<li>Comfortable jumping into ambiguous customer problems and finding solutions fast</li>
<li>Ability to understand complex systems and communicate clearly with both technical and non-technical stakeholders</li>
<li>Strong written communication skills (turning customer conversations into actionable specs/tasks)</li>
<li>Experience working across APIs, infrastructure, and cloud environments</li>
<li>High ownership mentality: you take responsibility for customer success end-to-end</li>
<li>Comfort operating in a fast-moving, low-process environment</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with serverless platforms, infra products, or developer platforms</li>
<li>Familiarity with observability tooling (logs, metrics, tracing)</li>
<li>Background in distributed systems, Kubernetes, or cloud-native deployments</li>
<li>Experience with AI/ML workloads in production</li>
<li>Experience writing documentation, onboarding guides, or customer playbooks</li>
</ul>
<p><strong>Why Join</strong></p>
<ul>
<li>Own the success of Fal&#39;s most important enterprise deployments</li>
<li>Work on a product used at massive scale with real production workloads</li>
<li>Direct influence over product roadmap through customer feedback loops</li>
<li>High autonomy and visibility across Product, Infra, and Sales leadership</li>
<li>Be a foundational member of a rapidly growing product vertical</li>
<li>Work at one of the fastest-growing AI startups, helping shape a new category</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Interesting and challenging work</li>
<li>Competitive salary and equity</li>
<li>A lot of learning and growth opportunities</li>
<li>We offer visa sponsorship and will help you relocate to San Francisco.</li>
<li>Health, dental, and vision insurance (US)</li>
<li>Regular team events and offsite</li>
</ul>
<p><strong>Compensation</strong></p>
<p>$150,000 - $230,000 + equity + comprehensive benefits package</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $230,000</Salaryrange>
      <Skills>TypeScript, Python, Postgres, Next.js, Serverless platforms, Infra products, Developer platforms, Observability tooling, Distributed systems, Kubernetes, Cloud-native deployments, AI/ML workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal is an AI startup that builds infrastructure for AI inference. It has reached a $4.5B valuation and has a lean team of ~70 employees.</Employerdescription>
      <Employerwebsite>https://www.fal.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4129387009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>40c3120b-89c</externalid>
      <Title>Database Reliability Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented Database Reliability Engineer to join our team. As a Database Reliability Engineer, you will be responsible for ensuring the rock-solid reliability of our existing RDS footprint. This includes architecting automated strategies for seamless, multi-version upgrades and proactive performance tuning to minimize downtime across hundreds of instances.  Our ideal candidate will have extensive experience working with Postgres and a passion for running stateful workloads natively on Kubernetes. They will also have a natural &quot;reluctance for manual implementation&quot; and believe that infrastructure should be managed entirely via code, using Terraform to provision the foundation and custom APIs to handle the orchestration.  The successful candidate will be excited by the challenge of &quot;multi-everything&quot;,multi-tenant, multi-region, and multi-cloud,while ensuring rigorous data integrity and mobility. They will also believe security is paramount and focus on building deep observability (Prometheus/Grafana/OpenTelemetry/Humio) and automated guardrails so the fleet is secure by design without requiring manual intervention.  As a Database Reliability Engineer, you will work closely with our Data teams to deliver meaningful and impactful insights to both the business and our customers. You will also have the opportunity to contribute to the development of our data layer and help shape the future of our technology stack.  In return for your hard work and dedication, we offer a competitive salary and benefits package, including 25 days holiday, an extra day&#39;s holiday for your birthday, and a generous family-friendly policy. We also offer a range of training and development opportunities to help you grow your skills and advance your career.  If you are passionate about working with databases and are looking for a challenging and rewarding role, please apply now.  <strong>Responsibilities</strong>  <em> Modernize and Scale the RDS Fleet </em> Architect Cross-Cloud Portability <em> Evolve Observability &amp; Monitoring </em> Support Replication &amp; Mobility <em> Fortify Business Continuity (BCP)  <strong>Requirements</strong>  </em> PostgreSQL &amp; Kubernetes Expert <em> Systems Thinker </em> Distributed Systems Enthusiast <em> A Security &amp; Observability Mindset </em> Engineering via Code  <strong>Nice to Have</strong>  <em> Experience with Terraform and custom APIs </em> Familiarity with Prometheus, Grafana, OpenTelemetry, and Humio <em> Knowledge of cloud-native patterns and provider-agnostic deployment </em> Experience with data streaming and &quot;Zero-Downtime&quot; migration strategies <em> Familiarity with Business Continuity Planning and Disaster Recovery strategies  <strong>What We Offer</strong>  </em> Competitive salary and benefits package <em> 25 days holiday </em> An extra day&#39;s holiday for your birthday <em> Generous family-friendly policy </em> Range of training and development opportunities * Opportunity to contribute to the development of our data layer and shape the future of our technology stack  <strong>How to Apply</strong>  If you are passionate about working with databases and are looking for a challenging and rewarding role, please apply now. We look forward to hearing from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and benefits package&quot;,   &quot;salaryMin&quot;: 95000,   &quot;salaryMax&quot;: 120000,   &quot;salaryCurrency&quot;: &quot;GBP&quot;,   &quot;salaryPeriod&quot;: &quot;year</Salaryrange>
      <Skills>PostgreSQL, Kubernetes, Terraform, Custom APIs, Prometheus, Grafana, OpenTelemetry, Humio, Cloud-native patterns, Provider-agnostic deployment, Data streaming, Zero-Downtime migration strategies, Business Continuity Planning, Disaster Recovery strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that provides banking services to customers in the UK. It has over 3,000 employees across its offices in London, Southampton, Cardiff, and Manchester.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/CCC0F3F287</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>48c63c78-c18</externalid>
      <Title>Sr. Backend JS Developer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a highly skilled Sr. Backend JS Developer to join our development team. As a senior member of our team, you will play a key role in the architecture and design of our entire platform, built from the ground up to be flexible, modular, and reusable. You will be responsible for building and maintaining this solution using TypeScript in a NodeJS environment, adhering to clean code standards and comprehensive documentation practices.</p>
<p>As a Sr. Backend JS Developer, you will work closely with the development and product teams, participate in daily scrums and weekly sprint meetings, and actively build the APIs (TypeScript/NodeJS). You will also write and execute tests, peer review code from other members of the team, and support the planning, feature estimation, and scoping of development work.</p>
<p>We believe good developers need clear requirements, but also focused time and space to do their best work. Accordingly, you will vocalize when you need to clarify uncertain requirements, help find solutions to translate our designers&#39; specifications into working features, and determine what work setting works best for you to get the job done.</p>
<p>To support our objectives, we run Agile ceremonies, plan and scope our work as a group, and believe in a continuous deployment philosophy. Our work is about creating value for our end-users, and you are a key part of bringing that experience to life via seamless integrations happening in the background.</p>
<p>If you meet the requirements of this unique opportunity, and want to impact our mission Health for all, Hunger for none, we encourage you to apply now. Be part of something bigger. Be you. Be Bayer.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120-170k</Salaryrange>
      <Skills>JavaScript, TypeScript, NodeJS, PostgreSQL, REST, GraphQL, event-based systems, AWS-hosted web applications, Redis, OAuth 2.0 protocol, Agile Delivery model, Continuous Deployment model, Docker, container orchestration technologies, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Life Sciences</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976999202</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>386f52aa-89a</externalid>
      <Title>Software Engineer (ML Projects)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled software engineer to join our ML Projects team. As a software engineer on this team, you will work with other engineers and data scientists to design, implement, and maintain features that make use of machine learning models under the hood. This could mean anything from creating a brand new ML-powered feature from scratch to seamlessly integrating a new model into our core banking platform.</p>
<p>You will have the autonomy to shape your own path, identify challenges, and collaborate with colleagues across teams to deliver impactful solutions across a range of technologies. We believe in empowering our engineers to take ownership and drive solutions from ideation to launch.</p>
<p>Our main tech stack includes Python, Java, JavaScript, Postgres, SQL, AWS, GCP, TeamCity, Terraform, Prometheus, and Grafana. If you have built and deployed complex Python applications or have hands-on experience with generative AI and LLMs, we would be especially keen to talk.</p>
<p>In general, you can expect the below, following a chat with one of our Talent Team:</p>
<ul>
<li>Stage 1 - 45 mins with one of the team</li>
<li>Stage 2 - Take-home challenge</li>
<li>Stage 3 - 90 mins technical interview with two team members</li>
<li>Stage 4 - 45 min final with an executive</li>
</ul>
<p>Benefits include 33 days holiday, an extra day&#39;s holiday for your birthday, annual leave increased with length of service, 16 hours paid volunteering time a year, salary sacrifice, company-enhanced pension scheme, life insurance at 4x your salary, and group income protection.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, JavaScript, Postgres, SQL, AWS, GCP, TeamCity, Terraform, Prometheus, Grafana, Generative AI, LLMs</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that provides banking services to individuals and businesses. It has over 3,000 employees across its offices in London, Southampton, Cardiff, and Manchester.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/5D5584B013</Applyto>
      <Location>Southampton</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>ca4294db-daa</externalid>
      <Title>Senior Software Engineer - Identity &amp; FinCrime</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Customer Identity &amp; Fincrime team. As a senior engineer, you will be responsible for designing, developing, and maintaining critical systems that verify identities, protect accounts, detect fraud, and prevent money laundering. You will work with cutting-edge technologies in a fast-paced environment, offering a unique opportunity to learn and grow within the exciting world of FinTech.</p>
<p>Our ideal candidate will have a strong background in software engineering, with experience in Java, AWS, and Postgres. You will be able to work independently and as part of a team, with excellent communication and problem-solving skills. You will also be passionate about delivering high-quality software and committed to continuous learning and improvement.</p>
<p>In this role, you will have the opportunity to:</p>
<ul>
<li>Design and develop new features and systems to improve the security and customer experience of our customers</li>
<li>Collaborate with cross-functional teams to identify and prioritize project requirements</li>
<li>Work with our DevOps team to ensure smooth deployment and operation of our systems</li>
<li>Participate in code reviews and contribute to the improvement of our codebase</li>
<li>Stay up-to-date with industry trends and emerging technologies</li>
</ul>
<p>If you&#39;re a motivated and experienced software engineer looking for a new challenge, we&#39;d love to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, AWS, Postgres, Cloud-native, Microservice-based architecture, Kubernetes, TeamCity, Terraform, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that offers a range of financial services to individuals and businesses. It has over 3,000 employees across multiple locations in the UK.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/04BC62B8F7</Applyto>
      <Location>Manchester</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d34ee930-f5c</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Cloud Platform Engineer to join our team. As a Cloud Platform Engineer, you will be responsible for designing and implementing cloud-native database infrastructure using Terraform /Ansible to provision managed DB instances in multi-clouds (RDS/Azure DB /Cloud SQL) and self-managed clusters.</p>
<p>You will also be responsible for automating Configuration Management, security hardening, and patching of database instances across all environments. Automate workflows to reduce manual effort and improve reliability.</p>
<p>In addition, you will develop internal tools and scripts (Python/Bash) to enable production support teams to manage their own database instances and environments safely. Develop scripts for routine operational tasks like backups, health checks, etc.</p>
<p>You will integrate advanced observability platforms (Dynatrace, CloudWatch) with AIOps tools to establish SLOs and train models for anomaly detection and proactive forecasting of database degradation like predicting slow queries or imminent connection pool exhaustion).</p>
<p>You will design, deploy, and govern AI-powered agents (using Azure Copilot /AWS Bedrock) to achieve autonomous self-healing capabilities and automated resource management.</p>
<p>You will implement advanced monitoring (CloudWatch, Dynatrace) for key database metrics (SLIs/SLOs) like latency, throughput, error rates, and connection pools. Develop and train predictive ML models to analyze historical telemetry and forecast potential system outages or performance bottlenecks and configure proactive monitoring and alerting for critical services.</p>
<p>You will respond to alerts and create self-healing actions based on alerts.</p>
<p>You will design and implement cross-region/multi-AZ replication, automated failover strategies, and point-in-time recovery (PITR) procedures for mission-critical databases. Disaster recovery planning and DR drills.</p>
<p>You will execute backup strategies and validate recovery procedures using Rubrik and Perform restores as needed.</p>
<p>You will work closely with application operations / Production support teams to troubleshoot issues on database layer (performance, locks, schema) and the platform layer (multi-cloud /middleware /network, resource limits) to find the root causes.</p>
<p>You will lead incident response and root cause analysis (RCA) for database outages, performance degradations, and data integrity issues. Collaborate with DBAs and application teams for root cause analysis.</p>
<p>You will implement AI tools to perform real-time Root Cause Analysis (RCA), correlate complex event data (logs, metrics) and auto-generate runbooks.</p>
<p>You will define and automate scaling strategies (read replicas, sharding, auto-scaling) based on predicted load and business growth. Provide input for capacity planning and resource optimization.</p>
<p>You will implement cost management policies, including rightsizing instances, managing storage tiers, and defining lifecycle rules for backups and snapshots.</p>
<p>You will proactively analyze query performance, index usage, and database configuration, making and automating changes to optimize throughput and reduce latency. Support DBA teams in performance tuning initiatives.</p>
<p>You will implement robust secrets management solutions (AWS Secrets Manager, HashiCorp Vault) for database credentials, ensuring applications retrieve secrets securely at runtime.</p>
<p>You will define and enforce least-privilege access policies (IAM roles, service accounts) for databases.</p>
<p>You will implement encryption and data masking policies as directed.</p>
<p>You will manage security and compliance by utilizing AI agents to detect configuration drift and auto-generate compliant updates for IAM, network, and security policies.</p>
<p>You will apply patches and perform upgrades in coordination with DBA teams. Validate post-upgrade functionality and compliance.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Oracle, DB2, MSSQL, Snowflake, PostgreSQL, MySQL, Terraform, Ansible, Python, Bash, Dynatrace, CloudWatch, Azure Copilot, AWS Bedrock, Rubrik, AI/ML, Cloud Native, Database Administration, Configuration Management, Security Hardening, Patching, Observability Platforms, AIOps Tools, Autonomous Self-Healing, Resource Management, Advanced Monitoring, Predictive ML Models, Proactive Monitoring, Alerting, Cross-Region/Multi-AZ Replication, Automated Failover Strategies, Point-in-Time Recovery, Disaster Recovery Planning, DR Drills, Backup Strategies, Recovery Procedures, Application Operations, Production Support Teams, Root Cause Analysis, Incident Response, AI Tools, Runbooks, Scaling Strategies, Capacity Planning, Resource Optimization, Cost Management Policies, Rightsizing Instances, Storage Tiers, Lifecycle Rules, Query Performance, Index Usage, Database Configuration, Secrets Management Solutions, Least-Privilege Access Policies, Encryption, Data Masking Policies, Security Compliance, Configuration Drift, Compliant Updates</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/aNTGp9AN6h4GPQ6Vrak2GZ/hybrid-cloud-platform-engineer-in-pune-at-capgemini</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>531dc584-ba0</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>Do you ever have the urge to do things better than the last time? We do. And it&#39;s this urge that drives us every day. Our environment of discovery and innovation means we&#39;re able to create deep and valuable relationships with our clients to create real change for them and their industries. It&#39;s what got us here – and it&#39;s what will make our future. At Quantexa, you&#39;ll experience autonomy and support in equal measures allowing you to form a career that matches your ambitions.</p>
<p>You&#39;ll be joining one of our DevOps teams in our R&amp;D department working on the Quantexa Cloud Platform and accompanying solutions, including platforms supporting data-intensive and AI-driven workloads. The platform is comprised of a landscape of low-maintenance, on-demand, and highly secure environments that host our software for customers and partners. These environments also support a wide range of internal use cases, underpinning the work of our R&amp;D teams.</p>
<p>As a Senior DevOps Engineer, you will:</p>
<p>Contribute to the evolution and improvement of our cloud-based platform, with a strong focus on availability, resilience, performance, and security. Take ownership of significant technical problems and initiatives, driving them through to delivery with a high degree of autonomy. Enhance our automation practices, helping reduce operational toil and improve the consistency and reliability of our platform, including the use of modern tooling and AI-assisted approaches where appropriate. Collaborate closely with software engineering teams to strengthen our CI/CD pipelines and optimise build, test, and deployment workflows, with an eye on improving overall developer productivity. Support the development of cloud-based product capabilities that customers can integrate into their own DevOps processes. Contribute to technical discussions, provide guidance on best practices, and help shape engineering standards within the team. Offer informal mentoring and knowledge-sharing to engineers, supporting the growth of the wider DevOps community.</p>
<p>This role focuses on deep hands-on technical expertise and the ability to lead complex workstreams, while stopping short of the architectural ownership and broader technical leadership responsibilities of a Lead Engineer.</p>
<p>Our Stack Includes: Kubernetes, Docker, Istio GitOps / DevOps tooling: ArgoCD, Jenkins, GitHub Actions Scripting &amp; Automation: Bash, Python, Groovy, Golang IaC &amp; Infrastructure Management: Terraform, Ansible, Packer, CasC Provisioning Frameworks: Elasticsearch, Spark, Hadoop, Airflow, PostgreSQL, etc. Observability: Fluentd, Prometheus, Grafana, Alertmanager Public Cloud: Primarily GCP and Azure, with some AWS</p>
<p>We are looking for candidates who: Take pride in designing, building, and delivering high-quality, well-engineered solutions to complex problems. Think holistically, ensuring solutions integrate effectively into large-scale distributed systems. Bring strong hands-on experience across several aspects of our cloud and DevOps stack. Have solid experience with programming/scripting/automation. Demonstrate a strong understanding of information security principles. Have experience operating and supporting cloud-native platforms in production environments. Are comfortable working autonomously, leading technical workstreams, and driving improvements. Enjoy sharing knowledge and supporting the development of other engineers.</p>
<p>Experience in the following would be beneficial: Infrastructure management and general Linux administration. Operating microservice-based architectures (scaling, upgrading, traffic management, deployment strategies). Software build, release engineering, and CI/CD pipeline enhancement. Exposure to a broad selection of the technologies listed in our tech stack. Exposure to platforms or tooling that support AI/ML workflows, data-intensive pipelines, or intelligent automation.</p>
<p>Why join Quantexa? Our perks and quirks. What makes you Q will help you to realize your full potential, flourish, and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>
<p>We offer: Competitive salary and Company Bonus Flexible working hours in a hybrid workplace &amp; free access to global WeWork locations &amp; events Pension Scheme with a company contribution of 6% (if you contribute 3%) 25 days annual leave (with the option to buy up to 5 days) + birthday off! Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period Family: Enhanced Maternity, Paternity, Adoption, or Shared Parental Leave Private Healthcare with AXA EAP, Well-being Days, Gym Discounts Free Calm App Subscription Workplace Nursery Scheme Team&#39;s Social Budget &amp; Company-wide Summer &amp; Winter Parties Tech &amp; Cycle-to-Work Schemes Volunteer Day off Dog-friendly Offices</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Docker, Istio, GitOps, DevOps, tooling, ArgoCD, Jenkins, GitHub Actions, Scripting, Automation, Bash, Python, Groovy, Golang, IaC, Infrastructure Management, Terraform, Ansible, Packer, CasC, Provisioning Frameworks, Elasticsearch, Spark, Hadoop, Airflow, PostgreSQL, Observability, Fluentd, Prometheus, Grafana, Alertmanager, Public Cloud, GCP, Azure, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/quantexa.com.png</Employerlogo>
      <Employerdescription>Quantexa is a company that creates deep and valuable relationships with clients to create real change for them and their industries.</Employerdescription>
      <Employerwebsite>https://www.quantexa.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/5FmBVWpa875z7Aah52FzVu/hybrid-senior-devops-engineer-in-london-at-quantexa</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8cceb431-49c</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As an Engineering Manager on the Infrastructure team at Cursor, you&#39;ll lead the team that owns the foundational cloud, networking, storage, and compute layer that every service runs on: network foundations, container orchestration, edge and security infrastructure, data storage systems, and the compute runtimes that power production.</p>
<p>Cursor is one of the fastest-growing developer tools in the world, and you&#39;ll drive the cost management, regional deployment strategy, and infrastructure unification that make that growth possible. When your team&#39;s systems work well, every team is more productive, every product surface is more reliable, and Cursor can expand to serve developers everywhere.</p>
<p>You&#39;ll set technical direction, write and review code, and lead a team of strong infrastructure engineers, balancing hands-on contribution with growing your team&#39;s impact.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Owning Kubernetes and cluster foundations: building and operating production clusters with proper service mesh, scaling, and ingress that teams can confidently deploy to.</li>
</ul>
<ul>
<li>Designing the geo-deployment architecture: building a replicable, robust process for deploying geo-replicated services across cloud regions and providers.</li>
</ul>
<ul>
<li>Building edge and security infrastructure: designing the networking and security layer at the edge to protect against abuse, manage rate limiting, and optimize traffic routing.</li>
</ul>
<ul>
<li>Owning data storage strategy: leading the team&#39;s work on Postgres, OLAP systems, and caching layers, ensuring our storage infrastructure is reliable, performant, and scales with the product.</li>
</ul>
<ul>
<li>Owning cost management and optimization: building attribution systems, identifying waste, and ensuring we&#39;re making smart tradeoffs between cost and reliability across all cloud spend.</li>
</ul>
<ul>
<li>Unifying the compute platform: defining a single, opinionated container orchestration strategy so every team gets consistent, reliable deployments out of the box.</li>
</ul>
<ul>
<li>Hiring and growing the team: sourcing, interviewing, and closing top infrastructure talent, while developing your engineers through coaching, mentorship, and high-leverage project assignments.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have led engineering teams building and operating production infrastructure or platform systems at scale.</li>
</ul>
<ul>
<li>You have deep experience with AWS (or comparable cloud providers), especially VPC networking, EKS/K8s, and IAM/account management.</li>
</ul>
<ul>
<li>You&#39;ve built and operated production Kubernetes clusters at scale, including service mesh, autoscaling, and multi-region deployments.</li>
</ul>
<ul>
<li>You have strong opinions on databases, storage engines, caching, and schema design, and understand the tradeoffs between performance, consistency, and cost.</li>
</ul>
<ul>
<li>You understand edge networking, CDN/WAF architectures, and traffic management at the infrastructure level.</li>
</ul>
<ul>
<li>You care about infrastructure-as-code, reproducibility, and making it easy for other teams to self-serve reliable infrastructure.</li>
</ul>
<ul>
<li>Experience with cost optimization at scale, infrastructure migration/unification, or data storage systems (Postgres, ClickHouse, OLAP) is a plus.</li>
</ul>
<p><strong>Salary</strong></p>
<p>$150,000 - $200,000 per year</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>AWS (or comparable cloud providers)</li>
<li>VPC networking</li>
<li>EKS/K8s</li>
<li>IAM/account management</li>
<li>Kubernetes</li>
<li>Service mesh</li>
<li>Autoscaling</li>
<li>Multi-region deployments</li>
<li>Databases</li>
<li>Storage engines</li>
<li>Caching</li>
<li>Schema design</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Cost optimization at scale</li>
<li>Infrastructure migration/unification</li>
<li>Data storage systems (Postgres, ClickHouse, OLAP)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 - $200,000 per year</Salaryrange>
      <Skills>AWS, VPC networking, EKS/K8s, IAM/account management, Kubernetes, Service mesh, Autoscaling, Multi-region deployments, Databases, Storage engines, Caching, Schema design, Cost optimization at scale, Infrastructure migration/unification, Data storage systems (Postgres, ClickHouse, OLAP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a developer tools company, one of the fastest-growing in the world.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/engineering-manager-infrastructure</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b243c12b-190</externalid>
      <Title>Data Engineer- Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are looking for a talented Data Engineer to join the Chief Data Office. This is a unique opportunity to be the first data engineer on the team, responsible for establishing the infrastructure and best practices for onboarding and maintaining data, as well as delivering insights to our stakeholders.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable data pipelines and systems to support data integration and analytics.</li>
<li>Establish best practices for data management, including data quality, data governance, and data security.</li>
<li>Collaborate with stakeholders to understand data requirements and deliver actionable insights.</li>
<li>Partner with adjacent data engineering teams to leverage and enhance existing data infrastructure.</li>
<li>Implement and optimize data storage solutions to ensure efficient data retrieval and processing.</li>
<li>Develop and maintain documentation for data engineering processes and systems.</li>
<li>Lead and mentor junior data engineers and analysts, fostering a culture of continuous learning and improvement.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Proven experience as a Data Engineer or in a similar role.</li>
<li>Strong knowledge of data engineering concepts, including ETL processes, data warehousing, and data modelling.</li>
<li>Proficiency in programming languages such as Python, SQL, PostgreSQL, and Java.</li>
<li>Experience with big data technologies such as Hadoop, Spark, and Snowflake.</li>
<li>Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication and collaboration skills.</li>
<li>Experience with batch processing and API integration.</li>
<li>Experience or eagerness to learn how to maintain and serve data for generative AI applications.</li>
<li>Familiarity with RAG (Retrieval-Augmented Generation) and vector databases.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and bonus</Salaryrange>
      <Skills>data engineering, ETL processes, data warehousing, data modelling, Python, SQL, PostgreSQL, Java, Hadoop, Spark, Snowflake, AWS, Azure, Google Cloud, batch processing, API integration, generative AI, RAG, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a multinational investment management corporation that provides a range of investment, risk management, and technology services to institutional and retail clients worldwide.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nRA8d7RLJFd4VVphMbi2n2/data-engineer--associate-in-budapest-at-blackrock</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e3d751c0-65b</externalid>
      <Title>Associate, Full-Stack Software Engineer (.Net, Python &amp; React)</Title>
      <Description><![CDATA[<p>At BlackRock, technology is central to our mission, and our team continues to drive innovation across the industry. We value curiosity, collaboration, and a willingness to experiment to tackle complex problems. This position is within Preqin, a division of BlackRock that plays an essential role in transforming private markets data and technology for clients worldwide. The Preqin Data Management team is responsible for building and maintaining software to manage data contributed by researchers, automated processes and direct contributions by 3rd parties.</p>
<p>As an Associate, Software Engineer on the Data Management team, you&#39;ll join an engineering pod responsible for managing contacts data. Your work will focus on building robust, scalable systems that align with business objectives. You&#39;ll deliver high-quality solutions by applying strong data expertise, product insight, and effective communication skills. Collaboration with other engineers, product managers, and data owners will be key as you help design, develop, and launch new features and influence our technical strategy.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Designing, implementing, and maintaining robust systems for data management.</li>
</ul>
<ul>
<li>Collaborating closely with engineering teams across the organisation to ensure adoption of optimal technical solutions and raising development standards through knowledge sharing and best practice implementation.</li>
</ul>
<ul>
<li>Actively contributing to technical discussions regarding new product directions, data modelling, and architectural decisions to ensure the technology platform remains scalable and adaptable.</li>
</ul>
<p>We are looking for:</p>
<ul>
<li>4+ years&#39; experience in software engineering.</li>
</ul>
<ul>
<li>Strong technical ability across the full stack: C#, Python, FastAPI, React and Typescript are a plus.</li>
</ul>
<ul>
<li>Experience with PostgreSQL and other SQL and NoSQL databases (MongoDB, AWS Aurora, Azure Cosmos DB, MS SQL Server, Cassandra are a plus).</li>
</ul>
<ul>
<li>Experience of working within cloud provider services – Azure or AWS and utilisation of infrastructure as code (Terraform).</li>
</ul>
<ul>
<li>Familiarity with containerisation – Docker and Kubernetes.</li>
</ul>
<ul>
<li>Excellent verbal and written communication and interpersonal skills.</li>
</ul>
<ul>
<li>A &#39;let&#39;s do it&#39; and &#39;challenge accepted&#39; attitude when faced with challenging tasks, and willingness to learn new technologies and ways of working.</li>
</ul>
<p>Our benefits include retirement investment, education reimbursement, comprehensive resources to support physical health and emotional well-being, family support programs, and Flexible Time Off (FTO).</p>
<p>Our hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, Python, FastAPI, React, Typescript, PostgreSQL, MongoDB, AWS Aurora, Azure Cosmos DB, MS SQL Server, Cassandra, Terraform, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/rF3NUqLpdmQwRkQEgaTaEb/associate%2C-full-stack-software-engineer-(.net%2C-python-%26amp%3B-react)-in-london-at-blackrock</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5558189c-8cd</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Software Engineer on the Storage team at Cursor, you&#39;ll own the data layer that underpins every product surface: the databases, caches, and the strategy for how teams provision, query, and scale their data stores.</p>
<p>Millions of developers depend on Cursor every day, and the future of our storage architecture is one of the highest-leverage problems at the company: get it right, and every team ships faster, every product surface gets more reliable, and Cursor can scale to meet explosive demand. You&#39;ll design and execute the path to a robust, multi-database topology built for that growth.</p>
<p><strong>Example projects include...</strong></p>
<ul>
<li>Designing the next-generation data architecture: evolving our storage layer into a partitioned, resilient topology that keeps pace with Cursor&#39;s rapid growth.</li>
</ul>
<ul>
<li>Building query attribution and guardrails: instrumenting every database query by service, catching bad patterns before they hit production, and making it impossible to ship problematic queries without review.</li>
</ul>
<ul>
<li>Defining the &#39;when to use what&#39; strategy for data stores: creating clear guidance and golden pathways so every team picks the right engine for their workload without second-guessing.</li>
</ul>
<ul>
<li>Owning cache infrastructure end-to-end: reliability, capacity planning, and patterns that let product teams move fast without worrying about cache correctness.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have deep experience with relational databases at scale, especially Postgres, MySQL, or similar OLTP systems.</li>
</ul>
<ul>
<li>You&#39;ve tackled database sharding, migration, or decomposition problems in production environments.</li>
</ul>
<ul>
<li>You understand the tradeoffs between different storage engines and can help teams make the right choices for their workloads.</li>
</ul>
<ul>
<li>You care about operational excellence: backups, monitoring, query performance, and capacity planning are things you think about proactively.</li>
</ul>
<ul>
<li>You have strong software engineering fundamentals and enjoy building systems that other engineers depend on.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, MySQL, relational databases, database sharding, migration, decomposition, storage engines, operational excellence, backups, monitoring, query performance, capacity planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a technology company providing data storage solutions to millions of developers worldwide. The company has a significant presence in the industry with a large user base.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-storage</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>03e33fe6-b6a</externalid>
      <Title>Lead Server Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As one of the largest sports entertainment platforms in the world, EA SPORTS FC is redefining football with genre-leading interactive experiences, connecting a global community of fans to The World&#39;s Game through innovation and unrivaled authenticity.</p>
<p>As a team, we are passionate about creating high-quality games and experiences worldwide. We learn from past experiences and strive for progress. We value team synergy and believe a relaxed working environment can yield better results. That&#39;s why we promote and support maintaining a healthy work-life balance.</p>
<p>As a software engineer, you are an essential part of the game creation process and are involved in the feature design and implementation of the game and live service. You will report to a Technical Director.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain server-side code and ensure robustness.</li>
<li>Responsible for the technical design of architecture/framework, such as underlying services or modules, service encapsulation/framework/DB storage/data caching etc., in the feature development process.</li>
<li>Oversee the server-side tasks and develop tools to ensure the healthy operation of the game server.</li>
<li>Manage projects/tasks of the team related to the server architecture/framework implementation, including planning, estimation, breakdown, coordination, and demonstrate commitment to delivery.</li>
<li>Troubleshoot complex server related technical issues to minimize the occurrence of critical issues and reduce downtime and service interruptions.</li>
<li>Collaborate with team members, stakeholders, operations teams and external partners.</li>
<li>Demonstrate impact through dialog, teamwork, and providing guidance to junior team members.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>At least 8 years&#39; experience in server development</li>
<li>Proficiency in at least one of Java/C++/Go</li>
<li>Knowledge of common networking protocols (e.g. TCP, UDP, HTTP and WebSocket)</li>
<li>Knowledge of relational databases (e.g. MySQL or Postgres), NoSQL databases (e.g. MongoDB), and in-memory data structure store (e.g. Redis)</li>
<li>Knowledge of container and serverless technologies (Docker, Kubernetes)</li>
<li>Experience in development for the Linux platform</li>
<li>Experience in version control software such as Git and Perforce</li>
<li>Excellent debugging capabilities</li>
<li>Experience in at least one shipped large online game development</li>
<li>Nice to be familiar with the Agile/Scrum methodology</li>
<li>Proficient in reading and writing English documents</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C++, Go, TCP, UDP, HTTP, WebSocket, MySQL, Postgres, MongoDB, Redis, Docker, Kubernetes, Linux, Git, Perforce</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of popular titles including EA SPORTS FC.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/213185-Lead-Server-Engineer/213185</Applyto>
      <Location>Shanghai</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8e8d6530-012</externalid>
      <Title>Senior Database Engineer</Title>
      <Description><![CDATA[<p>Amazon Leo is seeking an experienced Senior Database Engineer to lead the design, implementation, administration, and optimization of our Microsoft SQL Server and PostgreSQL database environments supporting Siemens Teamcenter PLM and related platform services.</p>
<p>As a Senior Database Engineer, you will collaborate across PLM Application, Infrastructure, and Integration teams to improve database performance, reliability, and scalability. You will ensure the database architecture is performant, highly available, and observable as the platform scales.</p>
<p>Key responsibilities include:</p>
<p>Supporting MS SQL Server and PostgreSQL database solutions for Teamcenter PLM and adjacent platform services across development, test, and production environments; Performing day-to-day DBA responsibilities including database provisioning, user and role management, access control, capacity planning, patching, and health monitoring across both MS SQL Server and PostgreSQL environments; Developing and implementing database migration strategies for Teamcenter upgrades, PostgreSQL adoption initiatives, and performance optimization as the platform scales to support additional product lines and integrations; Designing and optimizing database schemas, stored procedures, functions, and queries across MS SQL Server and PostgreSQL, ensuring consistency and performance across both platforms; Establishing best practices for MS SQL Server and PostgreSQL administration, backup/recovery, replication, and high availability in the context of Teamcenter&#39;s data model and enterprise integrations.</p>
<p>A day in the life includes analyzing query performance metrics, designing schema improvements to enhance system reliability, or working with application teams to troubleshoot database connectivity issues. You&#39;ll participate in infrastructure planning sessions, mentor colleagues on database best practices, and contribute to documentation that helps the broader organization understand our database strategy.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Microsoft SQL Server, PostgreSQL, Database Administration, Database Engineering, Teamcenter PLM, Siemens Teamcenter Data Model Architecture, CI/CD Pipelines, Azure Database Administrator Associate, PostgreSQL Certification, AWS Database Services, Database Monitoring and Observability Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Amazon Leo Builder Product Services</Employername>
      <Employerlogo>https://logos.yubhub.co/amazon.jobs.png</Employerlogo>
      <Employerdescription>Amazon Leo is a satellite broadband network designed to deliver fast, reliable internet to customers and communities worldwide.</Employerdescription>
      <Employerwebsite>https://amazon.jobs</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://amazon.jobs/en/jobs/10402499/sr-database-engineer-amazon-leo</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5c6cc661-c1b</externalid>
      <Title>Software Engineer I</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>The Server Engineer will report to the Technical Director.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and run a fast, scalable, highly available game service all the way from conception to delivery to live service operations</li>
<li>Work with designers, client engineering, and production teams to achieve gameplay goals</li>
<li>Implement security best practices and original techniques to keep user data secure and prevent cheating</li>
<li>Create and run automated testing, readiness testing, and deployment plans</li>
<li>Monitor the performance and costs of the server infrastructure to improve our game</li>
</ul>
<p>Qualifications:</p>
<p>We encourage you to apply if you can meet most of the requirements and are comfortable opening a dialogue to be considered.</p>
<ul>
<li>4+ years development of scalable back-end services</li>
<li>BS degree in Computer Science or equivalent work experience</li>
<li>Proficiency in Java or similar programming languages</li>
<li>Experience with Cloud services like Amazon Web Services or Google Cloud Platform</li>
<li>Experience with Database Design and usage of large datasets in both relational (MySQL, Postgres) and NoSQL (Couchbase, DynamoDB) environments</li>
</ul>
<p>Bonus:</p>
<ul>
<li>3+ years of experience developing games using cloud services like AWS, Azure, Google Cloud Platform, or similar</li>
<li>Proficient in technical planning, solution research, proposal, and implementation</li>
<li>Background using metrics and analytics to determine the quality or priority</li>
<li>Comfortable working across client and server codebases</li>
<li>Familiar with profiling, optimizing, and debugging scalable data systems</li>
<li>Passion for making and playing games</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Cloud services, Database Design, MySQL, Postgres, Couchbase, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher with a portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-I/213816</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>81ba1c4e-881</externalid>
      <Title>(Senior)Server Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As one of the largest sports entertainment platforms in the world, EA SPORTS FC is redefining football with genre-leading interactive experiences, connecting a global community of fans to The World&#39;s Game through innovation and unrivaled authenticity.</p>
<p>With more opportunity than ever to design, innovate and create new, immersive experiences that bring joy, inclusivity, and connection to fans everywhere, we invite you to join our team as we pioneer the future of football fandom.</p>
<p>As a software engineer, you will play a critical role in the system architecture design process. You will be deeply involved in the technical design and implementation of foundational modules and core services, and report directly to the Technical Director.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain server-side code and ensure robustness.</li>
<li>Responsible for the technical design of architecture/framework, such as underlying services or modules, service encapsulation/framework/DB storage/data caching etc., in the feature development process.</li>
<li>Oversee the server-side tasks and develop tools to ensure the healthy operation of the game server.</li>
<li>Manage projects/tasks of the team related to server feature implementation, including planning, coordination to delivery.</li>
<li>Troubleshoot complex server related technical issues to minimize the occurrence of critical issues and reduce downtime and service interruptions.</li>
<li>Collaborate with team members, stakeholders, operations teams and external partners.</li>
<li>Demonstrate impact through dialog, teamwork, and providing guidance to junior team members.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>At least 6 years&#39; experience in server development.</li>
<li>Proficiency in at least one of Java/C++/Go.</li>
<li>Knowledge of common networking protocols (e.g. TCP, UDP, HTTP and Websocket).</li>
<li>Knowledge of relational databases (e.g. MySQL or Postgres), NoSQL databases (eg. MongoDB), and in-memory data structure store (e.g. Redis).</li>
<li>Knowledge of container and serverless technologies (Docker, Kubernetes).</li>
<li>Experience in development for the Linux platform.</li>
<li>Experience in version control software such as Git and Perforce.</li>
<li>Excellent debugging capabilities.</li>
<li>Experience in at least one shipped large online game development.</li>
<li>Nice to be familiar with the Agile/Scrum methodology.</li>
<li>Proficient in reading and writing English documents.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C++, Go, TCP, UDP, HTTP, Websocket, MySQL, Postgres, MongoDB, Redis, Docker, Kubernetes, Linux, Git, Perforce</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of popular titles and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/213641-Server-Software-Engineer-III/213641</Applyto>
      <Location>Shanghai</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d351a09a-ab7</externalid>
      <Title>Senior Product Engineer, Product Foundry</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Product Engineer to join our Product Foundry team. As a key member of our team, you will play a critical role in driving innovation and disruption in the software industry. Your expertise will help us launch high-risk, full-stack initiatives that define and establish entirely new product categories.</p>
<p>In this role, you will work closely with our technical founders to design and develop new products, features, and services that meet the needs of our users. You will also collaborate with our cross-functional teams to ensure that our products are aligned with our overall business strategy.</p>
<p>As a Senior Product Engineer, you will have the opportunity to work on a wide range of projects, from building viable alternatives to popular SaaS vendors to creating full principal capabilities within third-party services. You will also have the chance to use Replit fully within various chat, agentic, and commerce environments.</p>
<p>If you&#39;re passionate about innovation, disruption, and pushing the boundaries of what&#39;s possible, we want to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $290K</Salaryrange>
      <Skills>software engineering experience, full agentic software development stack, strong track record leading complex projects, experience building and operating platform systems, strong product judgment, TypeScript, React, Node.js, Postgres</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language, with millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/878d844f-f9c2-481d-bb6f-0578a2fe42af</Applyto>
      <Location>Foster City, CA (Hybrid) In office M,W,F</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6e59ba7d-c56</externalid>
      <Title>Software Engineer, Safeguards Foundations (Internal Tooling)</Title>
      <Description><![CDATA[<p>We are seeking a software engineer to join our Safeguards Foundations team. As a member of this team, you will design, build, and maintain internal review and enforcement tooling used by Safeguards analysts. This includes case queues, content review surfaces, decision/audit logging, and account-actioning workflows. You will work closely with Trust &amp; Safety operations, policy, and detection-engineering teams to turn messy operational workflows into well-designed, durable software.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain internal review and enforcement tooling used by Safeguards analysts</li>
<li>Understand user workflows and establish tooling for well processes that may be distributed across a number of tools and UIs</li>
<li>Develop the &#39;base layer&#39; of reusable APIs, data storage, and backend services that let new review workflows be stood up quickly and safely</li>
<li>Partner with operations and policy teams to understand reviewer pain points, then translate them into clear product improvements that reduce handling time and decision error</li>
<li>Integrate tooling with upstream detection systems and downstream enforcement infrastructure so that flagged behaviour flows cleanly from signal → human review → action</li>
<li>Build in the guardrails that sensitive internal tools require: granular permissions, audit trails, data-access controls, and reviewer wellbeing features (e.g. content blurring, exposure limits)</li>
<li>Instrument the tools you ship , surfacing metrics on queue health, reviewer throughput, and decision quality so the team can see what&#39;s working</li>
<li>Contribute to the Foundations team&#39;s shared platform and on-call responsibilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience as a software engineer, with meaningful time spent building internal tools, operations platforms, or back-office products</li>
<li>Comfortable using agentic coding tools (e.g. Claude Code) as a core part of your workflow, and can direct them to ship well-tested, production-quality software at a high cadence without lowering the bar</li>
<li>Take a product-minded approach to internal users: you work with the people using your tools, watch where they struggle, and fix it</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Communicate clearly with non-engineering stakeholders and can explain technical trade-offs to operations and policy partners</li>
<li>Care about the societal impacts of your work and want to apply your engineering skills directly to AI safety</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience building tooling in a trust &amp; safety, content moderation, fraud, integrity, or risk-operations setting</li>
<li>Experience designing case-management or workflow systems (queues, SLAs, escalation paths, audit logs)</li>
<li>Experience working with sensitive data and understanding the privacy, access-control, and reviewer-wellbeing considerations that come with it</li>
<li>Experience with GCP/AWS, Postgres/BigQuery, and CI/CD in a production environment</li>
<li>Experience using LLMs as a building block inside operational tools (e.g. assisted triage, summarisation, or classification in the review loop)</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Rebuilding the analyst review queue so cases are routed by severity and skill, with full decision history and one-click escalation</li>
<li>Shipping a unified account-investigation view that pulls signals from multiple detection systems into a single, permissioned surface</li>
<li>Adding content-obfuscation and exposure-tracking features to protect reviewers working with harmful material</li>
<li>Building an internal labelling tool that feeds high-quality ground truth back to the detection and research teams</li>
</ul>
<p>Salary: £255,000 - £325,000 per year</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£255,000 - £325,000 per year</Salaryrange>
      <Skills>software engineering, internal tools, operations platforms, back-office products, agentic coding tools, Claude Code, product-minded approach, communication, technical trade-offs, trust &amp; safety, content moderation, fraud, integrity, risk-operations, GCP/AWS, Postgres/BigQuery, CI/CD, LLMs, assisted triage, summarisation, classification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191433008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f4b9bbfb-a0a</externalid>
      <Title>Member of Technical Staff (Software Engineer, Computer Monetization)</Title>
      <Description><![CDATA[<p>As a monetization engineer at Perplexity, you&#39;ll own the abstractions that let product teams ship new SKUs, run pricing experiments, and convert users into paying customers, without needing to understand billing internals. The usage-based billing system this team owns is foundational to Perplexity Computer and our agentic products: metering every agent action in real time, enforcing credit budgets, and settling costs across entirely new interaction models.</p>
<p>You&#39;ll work across subscriptions, usage-based billing, and enterprise contracts to ensure every revenue path is reliable, observable, and easy to extend. This includes designing, building, and owning the billing platform and monetization systems that power Computer and every paid experience across Perplexity.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading features end-to-end, from problem definition through technical design, implementation, and launch</li>
<li>Building and operating the subscription, invoicing, and usage-based billing systems that serve millions of users across consumer and enterprise plans</li>
<li>Hill-climbing on reliability: tiered SLOs organized by customer impact, proactive monitoring, and alerting to catch revenue-affecting issues before customers do</li>
<li>Partnering closely with Finance, Data Science, Growth, Security, Support, and go-to-market teams to keep billing data accurate, auditable, and compliant, and to expand payment method coverage and optimize authorization rates globally</li>
<li>Building internal tooling that empowers Support and Finance to diagnose and resolve billing issues quickly</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>4+ years of professional software engineering experience</li>
<li>Direct experience building or scaling billing, payments, subscription, or monetization systems</li>
<li>Deep familiarity with payment processors (Stripe, Square, etc.), including subscriptions, invoicing, disputes, refunds, and webhooks</li>
<li>Strong backend engineering skills in Python with the ability to reason about complex distributed systems</li>
<li>Experience with relational databases (PostgreSQL) and making data-informed decisions to prioritize work</li>
<li>Strong product judgment, you translate user and stakeholder problems into simple, effective technical solutions</li>
<li>Self-motivated with strong ownership instincts, you ship major features ahead of schedule and drive improvements without asking for permission</li>
<li>Track record of cross-functional collaboration with Finance, Support, or Growth stakeholders</li>
<li>Genuine interest and adoption of AI products and willingness to learn quickly</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$220K – $405K</Salaryrange>
      <Skills>Python, Go, Stripe, PostgreSQL, AWS, Docker, Relational databases, Payment processors, Subscription management, Usage-based billing, Enterprise contracts, Stripe APIs and SDKs, Usage-based or metered billing models, Apple and Google Play in-app purchase billing, Growth experimentation and PLG/self-serve SaaS monetization, Platform constructs for pricing experiments and A/B testing, Full-stack experience, Regulatory or compliance-heavy billing environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.ai.png</Employerlogo>
      <Employerdescription>Perplexity is an AI company that offers a range of products and services, including Computer, a defining product for the new era of agentic AI.</Employerdescription>
      <Employerwebsite>https://perplexity.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/043d6a58-87a1-4e3c-bf47-4dc351b94cf4</Applyto>
      <Location>San Francisco; New York City</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7fbf551a-201</externalid>
      <Title>Backend Engineer - API</Title>
      <Description><![CDATA[<p>As a Backend Engineer - API at xAI, you will play a key role in building the xAI API that serves our models to developers worldwide. You will own the end-to-end system responsible for high-throughput inference, handling billions of tokens per minute with low latency and high availability, including model serving infrastructure, request routing, SDK development, rate limiting, observability, and efficient scaling.</p>
<p>You will have expert knowledge of either Rust or C++ and experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems. You will also have knowledge of service observability and reliability best practices, as well as experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB.</p>
<p>Preferred skills and experience include experience with LLM inference engines and serving frameworks, agent SDKs and agent orchestration frameworks, Docker, Kubernetes, and containerized applications, and expert knowledge of gRPC.</p>
<p>In addition to a competitive base salary of $180,000 - $440,000 USD, you will receive equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, C++, PostgreSQL, Clickhouse, MongoDB, gRPC, LLM inference engines, Serving frameworks, Agent SDKs, Agent orchestration frameworks, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated, with a focus on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5119111007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>354a19bc-9ec</externalid>
      <Title>Software Engineer (Backend), Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re not just building AI tools,we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises,designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements , we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
<li>Optimize backend performance for latency, throughput, and cost,ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems,understanding how to deploy, evaluate, and scale AI workloads in production.</li>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
</ul>
<p>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, AI agent systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions. Their products provide high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4630032005</Applyto>
      <Location>Budapest, Hungary</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9c1d24ee-293</externalid>
      <Title>Senior Staff Product Engineer, Full Stack - ChatGPT Enterprise</Title>
      <Description><![CDATA[<p>We&#39;re hiring full stack engineers to build the product experiences that turn ChatGPT into an indispensable tool for teams,while ensuring the trust, controls, and performance enterprises require.</p>
<p>This role is especially focused on the user benefit &amp; activation track: designing collaboration experiences, onboarding and engagement loops, and product affordances that help employees discover and adopt advanced capabilities (including agents and custom GPT-style workflows).</p>
<p>In this role, you will:</p>
<ul>
<li>Build end-to-end features across frontend, backend-for-frontend, and service layers that improve how teams collaborate and share work in ChatGPT.</li>
</ul>
<ul>
<li>Design activation and engagement systems (e.g., product hooks, discovery surfaces, and admin-to-user rollout flows) that drive adoption across different user cohorts inside an enterprise.</li>
</ul>
<ul>
<li>Partner cross-functionally with engineers working on enterprise controls (security, permissions, compliance, residency) and company knowledge/search to ensure features ship with the right guardrails and enterprise readiness.</li>
</ul>
<ul>
<li>Collaborate with adjacent teams (including API/agent efforts) to integrate agent experiences into the ChatGPT interface in ways that feel native, safe, and scalable for business use.</li>
</ul>
<p>Your background might look something like:</p>
<ul>
<li>9+ years of professional engineering experience (excluding internships) in relevant roles at tech and product-driven companies</li>
</ul>
<ul>
<li>Former founder, or early engineer at a startup who has built a product from scratch is a plus</li>
</ul>
<ul>
<li>Proficiency with TypeScript, React, and other web technologies</li>
</ul>
<ul>
<li>Proficiency in one or more backend languages (e.g., Python, Go, Rust, Typescript or similar) and distributed systems concepts</li>
</ul>
<ul>
<li>Some experience with relational databases like Postgres/MySQL</li>
</ul>
<ul>
<li>Care deeply about reliability, safety, and performance in production environments.</li>
</ul>
<ul>
<li>Interest in AI/ML (direct experience not required)</li>
</ul>
<ul>
<li>Proven ability to thrive in fast-growing, product-driven companies by effectively navigating loosely defined tasks and managing competing priorities or deadlines.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>TypeScript, React, Python, Go, Rust, Postgres/MySQL, Distributed systems concepts, AI/ML, Reliability, Safety, Performance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2f8a9267-6fdf-4067-b162-d219b844268c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7e078ceb-e9a</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>
<p>Are you enthusiastic to mine raw data and realize its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts in implementing advanced physics and ML Models in production?</p>
<p>The Data Engineer role resides within the Ford’s Electric Vehicle organization. In this role, you will work on building scalable and robust data pipelines to process large volumes of connected vehicle data to support the Ford vehicle prognostic initiatives.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>
<li>Build data pipelines to monitoring quality of data and performance of analytical models.</li>
<li>Maintain the infrastructure of the data platform using terraform and continuously develop, evaluate, and deliver code using CI/CD.</li>
<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>
<li>Implement an enterprise data governance model and actively promote the concept of data - protection, sharing, reuse, quality, and standards.</li>
<li>Enhance and maintain the DevOps capabilities of the data platform.</li>
<li>Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>
<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>
<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>
<li>Perform any necessary data mapping, data lineage activities and document information flows.</li>
<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>
<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>
<li>Provide visibility to data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>
<li>Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions.</li>
<li>Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models developed by data scientists within Ford.</li>
<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>
<li>4 years of professional experience in:</li>
<li>Data engineering, data product development and software product launches</li>
<li>At least three of the following languages: Java, Python, Spark, Scala, SQL</li>
<li>3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:</li>
<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery.</li>
<li>Workflow orchestration tools like Airflow.</li>
<li>Relational Database Management System like MySQL, PostgreSQL, and SQL Server.</li>
<li>Real-Time data streaming platform like Apache Kafka, GCP Pub/Sub</li>
<li>Microservices architecture to deliver large-scale real-time data processing application.</li>
<li>REST APIs for compute, storage, operations, and security.</li>
<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker.</li>
<li>Project management tools like Atlassian JIRA.</li>
</ul>
<p><strong>Even better if you have...</strong></p>
<ul>
<li>Ph.D. or foreign equivalent degree in Computer Science, Software Engineering, Information System, Data Engineering, or a related field.</li>
<li>2 years of experience with ML Model Development and/or MLOps.</li>
<li>Committed code to improve open-source data/software engineering projects</li>
<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>
<li>GCP Professional Certifications.</li>
<li>Demonstrated passion to mine raw data and realize its hidden value.</li>
<li>Passion to experiment/implement state of the art data engineering methods/techniques.</li>
<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>
<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>
<li>Analytics skills to profile data, troubleshoot data pipeline/product issues.</li>
<li>Ability to simplify, clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.</li>
</ul>
<p>Experience Level: mid</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>This position is a range of salary grades 6-8.</Salaryrange>
      <Skills>Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is an American multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>ea4afeb3-e01</externalid>
      <Title>Vehicle Controls Engineering Process Tools Technical Specialist</Title>
      <Description><![CDATA[<p>In this role, you will co-lead the design, implementation, testing, documentation, and support of software changes to Ford-developed control systems tools under the direction of the Vehicle Controls Tools Technical Specialist.</p>
<p>Responsibilities include:</p>
<p>Design, develop, code, and test planned changes to software tools alongside the Vehicle Controls Tools Technical Specialist. Follow company policies and local change control process to bring approved changes to resolution. Provide user support across the globe using phone, email, Instant Messaging, and 1/1 interactions. Includes in-house tools for powertrain controls planning, software build and release, calibration management, and calibration release tools. Provide user account creation, maintenance, and access controls in user-interfacing tools. Co-lead conformance and upgrades of existing Powertrain Controls Servers and Company required software to Enterprise Technologies standards. Participate in the development and delivery of training material to world-wide Ford community. Partner with Enterprise Technology as architect/product owner to leverage additional resources. Document changes in the appropriate user manuals and other media. Participate in User Forum meetings.</p>
<p>Qualifications:</p>
<p>Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field. In-depth experience with the C-language, in support of an embedded control systems environment. Must have in-depth and working experience with C language data structures, struct, union, pointers, bit manipulation techniques, file read/write, hash tables, recursion, and recursive decent parsing. In-depth experience developing applications in a Unix environment, including make-files, gdb, bash, editing, setuid, and process fork. In-Depth experience developing Graphical User Interfaces in a Unix environment, using X-windows and Motif. Working experience with Postgres database, table record creation, and SQL. In-depth experience with TCP/IP Sockets, creating Daemons, inter-process communication, semaphores, file locking. Working experience of Code Configuration Management, Jira Issue Management. Microsoft Word, Excel and Powerpoint. Excellent oral and written communication skills. Excellent organizational skills.</p>
<p>Preferred Qualifications:</p>
<p>Working knowledge of industry standard file formats including ASAM e.V A2L, Intel H32, Motorola S-records. Knowledge of GTK+, Wayland, Builder Xcessory Knowledge of control system software and calibration development including ATI/Vision, ETAS/Inca.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$132,800-$180,800</Salaryrange>
      <Skills>C-language, Unix environment, Postgres database, SQL, TCP/IP Sockets, Code Configuration Management, Jira Issue Management, Microsoft Word, Excel, Powerpoint, ASAM e.V A2L, Intel H32, Motorola S-records, GTK+, Wayland, Builder Xcessory, ATI/Vision, ETAS/Inca</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker headquartered in Dearborn, Michigan. It designs, manufactures, markets, and distributes automobiles and commercial vehicles worldwide.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/61409</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2ca84bea-b34</externalid>
      <Title>Senior Software Engineer, End User Protection (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our End User Protection team. As a member of this fast-paced, agile team, you will design and build features using technologies such as Node.js (JavaScript/Typescript), AWS, Azure, MongoDB, PostgreSQL, DynamoDB, and Kubernetes. You will lead the technical breakdown of complex requirements into clear, modular, and actionable engineering tasks, setting the standard for project clarity and velocity.</p>
<p>You will drive and own the engineering estimation process for medium to large-sized initiatives, effectively managing risk and communicating technical trade-offs, timelines, and dependencies to engineering and product leadership. You will act as a key technical collaborator and influencer with internal stakeholders (e.g., Product Management, Security, Infrastructure), proactively aligning technical roadmaps and advocating for architectural changes that support long-term product vision.</p>
<p>You will collaborate with industry-leading experts to implement the cutting-edge of Identity Protocols and Open Standards such as OpenID Connect, OAuth, and SAML. You will maintain and operate services at a large scale, participate in scheduled on-call rotations, and mentor junior and mid-level engineers, providing guidance on system design, code quality, testing practices, and career development.</p>
<p>To be successful in this role, you will need practical experience using Node.js (JavaScript or Typescript) or a similar language, experience working on systems that are highly reliable, maintainable, and scalable, and a thorough understanding of application security and cloud security best practices.</p>
<p>You will also need a systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive. A track record of influencing engineering strategy and driving complex, multi-quarter projects to completion across organisational boundaries is also essential.</p>
<p>Experience with cloud environments (AWS and Azure preferred) and the ability to communicate your ideas and collaborate with other team members effectively in a remote working environment are also required.</p>
<p>In addition, enthusiasm to work with and learn more about Identity Protocols such as OAuth, OIDC, and SAML is a plus.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Node.js, JavaScript, Typescript, AWS, Azure, MongoDB, PostgreSQL, DynamoDB, Kubernetes, OpenID Connect, OAuth, SAML, Identity Protocols, Open Standards, Cloud Security Best Practices, System Design, Code Quality, Testing Practices, Career Development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is an easy-to-implement authentication and authorization platform designed by developers for developers. It ensures access to applications is safe, secure, and seamless for the more than 100 million daily logins worldwide.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7834248</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b8361772-263</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>In this role, you will join the Ford Pro Intelligence (FPI) Telematics team as a Software Engineer. The team creates back-end services and APIs that help customers understand, manage, and control their fleets of vehicles via web, mobile, and API applications.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Participating in and/or leading the development of requirements, features, user stories, use cases, and test cases.</li>
<li>Authoring process, technical design, and support documents.</li>
<li>Collaborating with the broader FPI Telematics team on solution designs, development, and deployment.</li>
<li>Participating and/or leading incident, problem, change, and service request-related activities, including root cause analysis (RCA).</li>
</ul>
<p>You will work on delivering products that include Spring/Cloud services that support processing and storing telematics information while providing a secure set of APIs accessible to customers.</p>
<p>As a Software Engineer, you will have the opportunity to work on a wide range of projects and contribute to the growth and success of the FPI Telematics team.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>A range of salary grades 6-8.</Salaryrange>
      <Skills>Java, Springboot, Kotlin, Node.js, GCP, AWS, Azure, serverless functions, databases, messaging queues, caching systems, relational databases, SQL like PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62744</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>cb1cb05e-5b7</externalid>
      <Title>Software Engineer, Scaled Abuse</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>The Applied team safely brings OpenAI&#39;s technology to the world. We released ChatGPT; Plugins; DALL·E; and the APIs for GPT-5, embeddings, and fine-tuning. We also operate inference infrastructure at scale. There&#39;s a lot more on the immediate horizon.</p>
<p>Our customers build fast-growing businesses around our APIs, which power product features that were never before possible. ChatGPT is a prime example of what is currently possible. We simultaneously ensure that our powerful tools are used responsibly. Safe deployment is more important to us than unfettered growth.</p>
<p>The Fraud Engineering team works within our Applied Engineering organization identifying and responding to fraudsters on our platform. We are looking for a software engineer with anti fraud &amp; abuse experience to help architect and build our next-generation anti-fraud systems.</p>
<p><strong>About the role</strong></p>
<p>The Scaled Abuse team protects OpenAI’s products and customers by detecting, preventing, and responding to fraudulent and abusive behavior at scale. We build and operate the backend and data systems that power real-time detection, investigation workflows, and enforcement , balancing strong protections with a great user experience as the platform grows.</p>
<p>Our work sits at the intersection of engineering and abuse expertise: we partner closely with Trust &amp; Safety, Security, and Product to understand emerging attack patterns, translate messy signals into clear system behavior, and continuously harden our defenses. The problems are dynamic and ambiguous by default, so we value engineers who can quickly dive into an unfamiliar codebase, develop strong intuition about how it works end-to-end, and propose pragmatic improvements that make the entire stack more resilient.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience</li>
</ul>
<ul>
<li>Work closely with finance, security, product, research, and trust &amp; safety operations to holistically combat fraudulent and abusive actors on our system</li>
</ul>
<ul>
<li>Stay abreast of the latest techniques and tools to stay several steps ahead of determined and well-resourced adversaries</li>
</ul>
<ul>
<li>Utilize GPT-5 and future models to more effectively combat fraud and abuse</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have at least 5 years of software engineering experience in backend and data systems.</li>
</ul>
<ul>
<li>Have at least 2 years experience in fraud or abuse analysis, investigation, and/or operations</li>
</ul>
<ul>
<li>Can dive into our codebase, intuit how it works, and be able to have a strong intuition for suggestions that will lead us to a stronger engineering position.</li>
</ul>
<ul>
<li>A voracious and intrinsic desire to learn and fill in missing skills. An equally strong talent for sharing that information clearly and concisely with others</li>
</ul>
<ul>
<li>Are comfortable with ambiguity and rapidly changing conditions. You view changes as an opportunity to add structure and order when necessary</li>
</ul>
<ul>
<li>Experience in Machine Learning techniques is a plus, but not required</li>
</ul>
<p><strong>Our tech stack</strong></p>
<p>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills and the ability to quickly pick up new tools and frameworks.</p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.</p>
<p>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through [this form](https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.</p>
<p>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this [link](https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241).</p>
<p>[OpenAI Global Applicant Privacy Policy](https://cdn.openai.com/policies/global-employee-and</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>backend and data systems, fraud detection and remediation, machine learning techniques, Python, Postgres, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/69417f32-b564-471b-acdf-f0330bd7074e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bf7a45a5-8d0</externalid>
      <Title>Analytics Scientist</Title>
      <Description><![CDATA[<p>We are seeking a highly motivated and technically skilled professional to join the Analytics Solutions Integration team within Credit Analytics. This position is ideal for professionals excited about implementing predictive models for multiple business functions, modernizing legacy processes, and applying advanced technologies to transform analytics delivery.</p>
<p>As an Analytics Scientist, you will bridge the gap between analytical model development and production deployment, using SAS, Vendor tools on OnPrem and Cloud based applications. You will also be a key contributor in modernizing legacy mainframe-based batch testing processes through automation, dataset comparison frameworks, and summarization/reporting of the data.</p>
<p>Responsibilities:</p>
<p>Implement, validate, test, and Productionalize the predictive models and risk strategies across global platforms. Collaborate with Data Scientists, Business teams, and IT to ensure smooth transition of models from development to production. Design and implement automated pipelines that support batch testing workflows involving mainframe JCL, flat files, VSAM or legacy datasets. Develop reusable and repeatable automation for comparing and summarizing data between legacy and modernized systems for multiple business functions. Use GCP services such as BigQuery, PostgreSQL, Cloud Functions, Cloud Storage, Cloud Composer, Cloud Run, and Pub/Sub to build scalable workflows that support analytics delivery. Move mainframe outputs to cloud storage for processing and use SQL/Python/LLM-enhanced logic to analyze results. Identify opportunities to introduce automation, GenAI tooling, and workflow simplification and develop the Proof-of-Concepts to enhance delivery processes through automation. Provide data analysis, SQL/SAS/Python programming, and on-demand reporting aligned to business needs.</p>
<p>Qualifications:</p>
<p>Bachelor’s degree in computer science, Data Science, Information Systems, Engineering, or related field. 4–5 years of programming experience in object-oriented and procedural paradigms (Example: Java, SQL, and Python),  with experience preferably in Statistical Analysis System software (like&quot;SAS RealTime Decision Manager&quot;). Experience with Relational Database Management Systems (like DB2) 1–2 years hands-on experience with Google Cloud Platform (GCP) including BigQuery, PostgreSQL, Cloud Storage, Cloud Functions, etc. Familiarity with Waterfall, Agile, and PDO methodologies. Experience working with IT testing environments, regression testing, and automated validation. Strong understanding of modern automation frameworks and AI-powered tooling like Agentic AI.</p>
<p>Even better you’ll have:</p>
<p>Master’s degree. Experience integrating or automating processes involving mainframe legacy systems. Gen AI tooling - Agentic AI, Workflows &amp; Cloud integration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>SG5-SG8&quot;,   &quot;salaryMin&quot;: 95000,   &quot;salaryMax&quot;: 140000,   &quot;salaryCurrency&quot;: &quot;USD&quot;,   &quot;salaryPeriod&quot;: &quot;year</Salaryrange>
      <Skills>SAS, Vendor tools, Google Cloud Platform, BigQuery, PostgreSQL, Cloud Functions, Cloud Storage, Cloud Composer, Cloud Run, Pub/Sub, SQL, Python, LLM-enhanced logic, Automation, GenAI tooling, Agentic AI, Waterfall, Agile, PDO methodologies, Relational Database Management Systems, DB2</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Credit Company</Employername>
      <Employerlogo>https://logos.yubhub.co/fordcredit.com.png</Employerlogo>
      <Employerdescription>Ford Motor Credit Company provides financing and personalized service to thousands of dealers and millions of customers worldwide.</Employerdescription>
      <Employerwebsite>https://www.fordcredit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/61844</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dfe3062f-992</externalid>
      <Title>Staff Backend Software Engineer — Privileged Access Management (PAM)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Backend Software Engineer , Okta Privileged Access Management (PAM)</p>
<p>Ever wonder how large organisations make sure the right people can access their most critical systems? That&#39;s the problem we solve. Our team builds the infrastructure that controls who can reach sensitive servers, databases, and cloud resources, grants access only when it&#39;s needed. It is the security layer between people (and non-human-interfaces) and the systems they need to do their jobs.</p>
<p>We&#39;re looking for a Backend Software Engineer who wants to work on hard problems: distributed systems and building software where getting it right really matters. You&#39;ll ship code that protects real infrastructure for real organisations. You’ll build foundations that multiple feature teams depend on. When you make something faster, more reliable, or easier to use, it multiplies across the entire product.</p>
<p>This is a role for someone who likes thinking about how systems fit together. You&#39;ll need strong opinions about what makes a good abstraction, and the flexibility to evolve those abstractions as the product grows.</p>
<p>The Okta Privileged Access Management (PAM) is an identity-centric approach to a common and critical privileged access use case. Our elegant Zero Trust architecture is purpose-built for the modern cloud and helps customers solve challenging security and operations pain points at scale.</p>
<p>We are looking for a Backend Software Engineer to join our fast-growing team with a focus on scalability, reliability, and enhancing the building blocks of the product. In this role you will:</p>
<ul>
<li>Be deeply involved in evolving the core architecture of PAM.</li>
</ul>
<ul>
<li>Work in our product development teams to build scalable, composable components of our platform.</li>
</ul>
<ul>
<li>Be responsible for designing and implementing scalable architecture patterns.</li>
</ul>
<ul>
<li>Design and build APIs with OpenAPI Specification that customers rely on for access to production infrastructure.</li>
</ul>
<ul>
<li>Work on backend systems written in Go</li>
</ul>
<ul>
<li>Participate in the rotational on-call activities with SRE and product development teams.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Are an experienced software engineer with a background in Golang (other languages are also acceptable)</li>
</ul>
<ul>
<li>Experienced working with relational databases like PostgreSQL or similar RDBMS technologies.</li>
</ul>
<ul>
<li>Have the ability to design database models and backend APIs.</li>
</ul>
<ul>
<li>Have experience working with cloud services like Caching, Queues, NoSQL Databases etc.</li>
</ul>
<ul>
<li>Experienced working with any cloud provider such as AWS, GCP or Azure.</li>
</ul>
<ul>
<li>Thrive in a collaborative environment built on end-to-end ownership.</li>
</ul>
<ul>
<li>Love thinking about distributed systems, and the reliability, availability, and performance implications of the decisions made in their design.</li>
</ul>
<ul>
<li>Enjoy deep-diving into production metrics, and familiarity with monitoring tools like Splunk, DataDog etc.</li>
</ul>
<ul>
<li>Think in terms of systems, services, and APIs.</li>
</ul>
<p>Required education and experience:</p>
<ul>
<li>8+ years working as a software engineer.</li>
</ul>
<ul>
<li>Experience working with production systems.</li>
</ul>
<ul>
<li>Bachelors in CS, or equivalent</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000-$200,000 CAD</Salaryrange>
      <Skills>Golang, PostgreSQL, database models, backend APIs, cloud services, caching, queues, NoSQL Databases, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. The company was founded in 2009 and has grown to become a leading player in the industry.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7826456</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>653bca90-18d</externalid>
      <Title>Engineering Manager, Organizations (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for an experienced Engineering Manager to lead our Organizations team. As an Engineering Manager, you will be responsible for managing a team of 9 remote engineers, mentoring and coaching them to achieve their goals. You will work closely with the Product Manager to plan and deliver the team&#39;s quarterly and annual roadmap. You will also be responsible for owning and being accountable for the quality of the team&#39;s technical estate, effectively managing technical debt, addressing security vulnerabilities, and ensuring wider cross-team technical initiatives are delivered in a timely manner.</p>
<p>The ideal candidate will have experience growing engineers to the next level, bringing off-track engineers back on track, and working on projects that require close collaboration with external teams. They will also have solid architectural knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</p>
<p>In particular, you will be able to spot areas where scalability and performance might be affected. You will know how to track and steer a project to successful and timely delivery. Experience in authentication protocols such as OAuth2, OIDC, SAML, and understanding of event-driven architectures, especially Apache Kafka, is a plus.</p>
<p>As an Engineering Manager at Okta, you will have the opportunity to work on a wide range of challenging projects, collaborate with a talented team of engineers, and contribute to the growth and success of the company.</p>
<p>If you are a motivated and experienced engineer looking for a new challenge, we encourage you to apply for this exciting opportunity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$168,000-$231,000 CAD</Salaryrange>
      <Skills>NodeJS, JavaScript, TypeScript, PostgreSQL, AWS, Azure, Containers, Authentication protocols, Event-driven architectures, OAuth2, OIDC, SAML, Apache Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7843717</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>93447a87-531</externalid>
      <Title>Full Stack Software Engineer, OpenAI Edu</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$230K – $385K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the team</strong></p>
<p>OpenAI’s Education team is building products and experiences that help learners, educators, and institutions benefit from AI in ways that are rigorous, useful, and grounded in real learning outcomes.</p>
<p><strong>About the role</strong></p>
<p>We’re looking for a product-minded Full Stack Engineer to help build OpenAI’s education products from the ground up. You’ll own end-to-end development across the stack, from early concepting and prototyping through production launch and iteration.</p>
<p>This is an opportunity to work on a highly strategic, early-stage product area where engineering judgment, product sense, and customer empathy all matter.</p>
<p>You’ll partner closely with leaders across the education org, including learning scientists, researchers, designers, and cross-functional partners, to turn emerging ideas into durable product experiences for schools, universities, and other education stakeholders.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and ship product experiences across the full stack for OpenAI’s education offerings</li>
</ul>
<ul>
<li>Own projects end-to-end, from ideation and technical design through implementation, launch, and iteration</li>
</ul>
<ul>
<li>Work closely with learning scientists and researchers to translate learning goals and evidence into product decisions</li>
</ul>
<ul>
<li>Collaborate with design, data, and cross-functional partners to build thoughtful, high-quality user experiences</li>
</ul>
<ul>
<li>Help define the engineering foundation for a growing education pod, including patterns, systems, and technical direction</li>
</ul>
<ul>
<li>Contribute to products that may inform both consumer and B2B education experiences over time</li>
</ul>
<p><strong>Your background might look something like:</strong></p>
<ul>
<li>5+ years of professional engineering experience (excluding internships) in relevant roles at tech and product-driven companies</li>
</ul>
<ul>
<li>Former founder, or early engineer at a startup who has built a product from scratch is a plus</li>
</ul>
<ul>
<li>Proficiency with TypeScript, React, and other web technologies</li>
</ul>
<ul>
<li>Proficiency in one or more backend languages (e.g., Python, Go, Rust, Typescript or similar) and distributed systems concepts</li>
</ul>
<ul>
<li>Some experience with relational databases like Postgres/MySQL</li>
</ul>
<ul>
<li>Care deeply about reliability, safety, and performance in production environments.</li>
</ul>
<ul>
<li>Interest in AI/ML (direct experience not required)</li>
</ul>
<ul>
<li>Proven ability to thrive in fast-growing, product-driven companies by effectively navigating loosely defined tasks and managing competing priorities or deadlines.</li>
</ul>
<ul>
<li>Have experience building interactive educational tools or learning products, or are excited to apply your skills in this space</li>
</ul>
<ul>
<li>Have a strong interest in improving access to education and expanding opportunities for learners worldwide</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>TypeScript, React, Python, Go, Rust, Postgres/MySQL, Distributed systems concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/9b1b62f5-1400-4672-910a-fda6f975f642</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>fd6b5411-cdc</externalid>
      <Title>Software Engineer, Engineering Acceleration \| Consumer Devices</Title>
      <Description><![CDATA[<p>Job Title: Software Engineer, Engineering Acceleration \| Consumer Devices</p>
<p><strong>About the Role</strong> The Engineering Acceleration-focused engineer on the Consumer Device Infrastructure team builds the CI/CD systems, developer workflows, and internal platform capabilities that help engineers develop, test, ship, and debug software across device and cloud surfaces.</p>
<p>This is a highly hands-on senior engineering role focused on CI/CD, software build and deployment pipelines, and developer productivity. You will design and build the technical foundations that improve engineering velocity, reduce toil, and increase software quality. You will also make pragmatic architecture and platform decisions based on the organization’s stage, scaling needs, and security requirements.</p>
<p>We’re looking for an engineer with deep experience in developer productivity and CI/CD who enjoys building robust internal platforms, improving day-to-day engineering workflows, and creating secure, reliable systems that other engineers depend on.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and operate CI/CD systems and software delivery pipelines for software that runs both on device and in the cloud.</li>
<li>Lead the design and architecture of internal platform capabilities across build, test, deployment, workflow automation, and developer tooling.</li>
<li>Make hands-on technical decisions about platform design, abstraction boundaries, and system tradeoffs based on the team’s current stage, scale, and operational needs.</li>
<li>Improve developer productivity by shortening feedback loops across build, test, debugging, environment setup, and release-adjacent workflows.</li>
<li>Build self-serve paved-road workflows that reduce manual effort and make common engineering tasks fast, reliable, and easy to adopt.</li>
<li>Use AI-native tooling and automation to improve engineering workflows, failure triage, and developer output quality.</li>
<li>Build secure-by-default platform capabilities, including access controls, secrets and credential handling, artifact permissions, auditability, and policy enforcement in software delivery workflows.</li>
<li>Partner closely with product, systems, release, quality, and infrastructure teams to understand pain points and turn them into durable platform improvements.</li>
<li>Define and track metrics for platform health and engineering effectiveness, including build times, queue times, failure rates, flaky failures, and deployment lead time.</li>
<li>Guide engineering teams on pragmatic observability, reliability, and scalability choices for the systems they build.</li>
<li>Participate in the on-call rotation for the systems owned by the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of software engineering experience, including 5+ years building developer productivity, CI/CD, internal platform, or engineering systems.</li>
<li>Deep experience designing and operating robust CI/CD pipelines, build systems, and software delivery infrastructure for complex products.</li>
<li>Strong track record of personally building platform features and workflow improvements that materially increased engineering velocity, reliability, or developer experience.</li>
<li>Experience making architectural decisions for internal platforms, including when to standardize, when to abstract, and when to keep systems simple.</li>
<li>Experience adapting technical decisions to the maturity and scaling stage of an organization, balancing speed, reliability, maintainability, and adoption.</li>
<li>Working knowledge of secure software delivery practices such as least-privilege access, secrets management, policy enforcement, auditability, or software supply chain hardening.</li>
<li>Strong empathy for the tools, workflows, and frustrations that create toil or slow engineering teams down.</li>
<li>Comfortable operating in ambiguous, fast-changing environments and bringing structure where needed.</li>
<li>Communicate clearly and work effectively across teams with different priorities, constraints, and technical needs.</li>
</ul>
<p><strong>Technical Context</strong></p>
<p>At the heart of our infrastructure is a large-scale deployment of CPU/GPU nodes running in Kubernetes clusters across regions. We build secure systems that support software running on device and in the cloud. Some core technologies we work with include Terraform, Buildkite, Bazel, Postgres, Cosmos DB, Kafka, and more.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>Salary Min</strong>: 230000 <strong>Salary Max</strong>: 342000 <strong>Salary Currency</strong>: USD <strong>Salary Period</strong>: Year <strong>Required Skills</strong>: Terraform, Buildkite, Bazel, Postgres, Cosmos DB, Kafka <strong>Preferred Skills</strong>: None</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $342K</Salaryrange>
      <Skills>Terraform, Buildkite, Bazel, Postgres, Cosmos DB, Kafka</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/ae3a32af-b862-45db-838c-7fb49d4bc27e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8c8d158d-9d6</externalid>
      <Title>Senior Staff Backend Software Engineer, ChatGPT Enterprise</Title>
      <Description><![CDATA[<p>We&#39;re hiring backend engineers to build the product experiences that turn ChatGPT into an indispensable tool for teams,while ensuring the trust, controls, and performance enterprises require. This role is especially focused on the enterprise controls track: designing collaboration experiences, onboarding and engagement loops, and product affordances that help employees discover and adopt advanced capabilities (including agents and custom GPT-style workflows).</p>
<p>In this role, you will:</p>
<ul>
<li>Build backend systems that power enterprise controls, including permissions, policy enforcement, auditability, and compliance workflows.</li>
</ul>
<ul>
<li>Design residency- and region-aware architectures that enable enterprise workloads to meet country- and customer-specific requirements.</li>
</ul>
<ul>
<li>Partner with product, security, legal/compliance, and adjacent engineering teams to turn enterprise requirements into scalable technical systems.</li>
</ul>
<ul>
<li>Improve the reliability, observability, and operational maturity of the services that underpin ChatGPT Enterprise.</li>
</ul>
<p>Writable about:</p>
<ul>
<li>9+ years of professional engineering experience (excluding internships) in relevant roles at tech and product-driven companies</li>
</ul>
<ul>
<li>Former founder, or early engineer at a startup who has built a product from scratch is a plus</li>
</ul>
<ul>
<li>Proficiency in one or more backend languages (e.g., Python, Go, Rust, Typescript or similar) and distributed systems concepts</li>
</ul>
<ul>
<li>Experience designing and operating distributed systems with a strong emphasis on reliability, performance, and security</li>
</ul>
<ul>
<li>Experience building systems involving access controls, identity, compliance, privacy, data governance, or other enterprise platform concerns</li>
</ul>
<ul>
<li>Some experience with relational databases like Postgres/MySQL</li>
</ul>
<ul>
<li>Care deeply about reliability, safety, and performance in production environments</li>
</ul>
<ul>
<li>Interest in AI/ML (direct experience not required)</li>
</ul>
<ul>
<li>Proven ability to thrive in fast-growing, product-driven companies by effectively navigating loosely defined tasks and managing competing priorities or deadlines.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>backend languages, distributed systems, access controls, identity, compliance, privacy, data governance, relational databases, Python, Go, Rust, Typescript, Postgres/MySQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a5591da5-23f3-4926-ac8d-8d1c927e3004</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2dd530a1-6a3</externalid>
      <Title>Staff Backend Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The PAM Team</p>
<p>Ever wonder how large organisations make sure the right people can access their most critical systems? That&#39;s the problem the Okta Privileged Access Management (PAM) team solves. Our solution controls who can reach sensitive servers, databases and cloud resources and grants access only when it&#39;s needed. It is the security layer between people (and non-human-identities) and the systems they need to do their jobs.</p>
<p>The Staff Backend Engineer Opportunity</p>
<p>We&#39;re looking for a Backend Software Engineer who wants to work on hard problems: distributed systems and building software where getting it right really matters. You&#39;ll ship code that protects real infrastructure for real organisations. You’ll build foundations that multiple feature teams depend on. When you make something faster, more reliable, or easier to use, it multiplies across the entire product.</p>
<p>This is a role for someone who likes thinking about how systems fit together. You&#39;ll need strong opinions about what makes a good abstraction, and the flexibility to evolve those abstractions as the product grows.</p>
<p>What you’ll be doing</p>
<p>Be deeply involved in evolving the core architecture of PAM.</p>
<p>Work in our product development teams to build scalable, composable components of our platform.</p>
<p>Be responsible for designing and implementing scalable architecture patterns.</p>
<p>Design and build APIs with OpenAPI Specification that customers rely on for access to production infrastructure.</p>
<p>Work on backend systems written in Go</p>
<p>Participate in the rotational on-call activities with SRE and product development teams</p>
<p>What you’ll bring to the role</p>
<p>8+ years of experience as a SWE</p>
<p>Are an experienced software engineer with a background in Golang (other languages are also acceptable)</p>
<p>Experienced working with relational databases like PostgreSQL or similar RDBMS technologies.</p>
<p>Have the ability to design database models and backend APIs.</p>
<p>Have experience working with cloud services like Caching, Queues, NoSQL Databases etc.</p>
<p>Experienced working with any cloud provider such as AWS, GCP or Azure.</p>
<p>Thrive in a collaborative environment built on end-to-end ownership.</p>
<p>Love thinking about distributed systems, and the reliability, availability, and performance implications of the decisions made in their design.</p>
<p>Enjoy deep-diving into production metrics, and familiarity with monitoring tools like Splunk, DataDog etc.</p>
<p>Think in terms of systems, services, and APIs</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Golang, PostgreSQL, Cloud services, Caching, Queues, NoSQL Databases, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7819478</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>486f5044-c48</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Software Engineer on our Platform team to own and scale the systems that route and serve millions of LLM requests every day. The business is growing at an unbelievable pace and we need help to ensure our platform can keep up.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own and evolve our edge and cloud infrastructure across Cloudflare, Google Cloud, and Vercel.</li>
<li>Scale and operate our data layer including Spanner, ClickHouse, and Postgres.</li>
<li>Ensure we are optimizing for performance when serving LLM inference as traffic rapidly grows.</li>
<li>Partner with engineering leadership on capacity, reliability, and cost across the routing layer, with ownership of the systems carrying production traffic.</li>
<li>Set the bar and playbook for how we run infrastructure and operations as the team grows , tooling, observability, on-call, and the patterns other engineers build against.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years building and operating production infrastructure at companies where uptime, latency, and cost matter.</li>
<li>Proven experience with cloud platforms (GCP, AWS, Azure) and edge-first serverless platforms (e.g. Cloudflare Workers)</li>
<li>Deep expertise in operating large scale databases (e.g Postgres, Spanner, etc).</li>
<li>A full-stack TypeScript shop won&#39;t faze you; you can move across the stack when the platform needs it.</li>
<li>High agency and a bias toward action. You don&#39;t wait for tickets , you see the bottleneck and fix it.</li>
<li>AI-forward in your workflow. You use coding agents, MCPs, and LLMs heavily and have opinions about what works.</li>
<li>Pragmatic about tradeoffs between speed and simplicity.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Existing user of OpenRouter, or active side projects in AI products/infrastructure or developer tooling.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$215,000 to $285,000 plus benefits &amp; equity</Salaryrange>
      <Skills>Cloudflare, Google Cloud, Vercel, Spanner, ClickHouse, Postgres, TypeScript, GCP, AWS, Azure, Cloudflare Workers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenRouter</Employername>
      <Employerlogo>https://logos.yubhub.co/openrouter.com.png</Employerlogo>
      <Employerdescription>OpenRouter is the leading AI routing and infrastructure layer that enterprises use to access, manage, and optimize large language models across providers. It powers the most advanced AI teams in the world.</Employerdescription>
      <Employerwebsite>https://openrouter.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openrouter/47c2bcd2-f71c-47a6-831f-a4130d607a7b</Applyto>
      <Location>Remote (US)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b490d457-f6f</externalid>
      <Title>Product Growth Engineer</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$120K – $220K • 0.01% – 0.1%</p>
<p><strong>Senior Product Growth Engineer</strong></p>
<p>Firecrawl&#39;s Product Growth team runs like an engineering org. Every initiative we take on (activation flows, conversion landing pages, in-product surfaces, internal tooling) is shipped as real software by the people on this team. The backlog is long and most of it is bottlenecked on engineering capacity, not ideas.</p>
<p>We need a senior full-stack engineer who can take on growth projects end-to-end: scope the work, build the frontend and backend, ship it to production, and iterate. Someone who can work across the stack at speed and own ambitious projects without needing a PM to break them down. This is a high-output IC role. You&#39;ll work directly with the Head of Product Growth on priorities, build alongside the team&#39;s data and growth engineers, and have direct access to core engineering for anything that needs coordination.</p>
<p>Scope note: top-of-funnel work (brand, SEO content, broad marketing site) lives with the Marketing team. Your work lives closer to conversion: the pages, flows, and in-product surfaces where interested users turn into active, paying customers.</p>
<p><strong>Salary Range:</strong></p>
<p>120,000 to 200,000/year OTE (Range shown is for U.S.-based employees in San Francisco, CA. Compensation outside the U.S. is adjusted fairly based on your country&#39;s cost of living. You can explore how we calculate this here: https://www.firecrawl.dev/careers/compensation)</p>
<p><strong>Equity Range:</strong></p>
<p>0.01 to 0.10%</p>
<p><strong>Job Type:</strong></p>
<p>Full-Time (SF) or Contract (Remote)</p>
<p><strong>Experience:</strong></p>
<p>5+ years</p>
<p><strong>Visa:</strong></p>
<p>US Citizenship/Visa required for SF; N/A for Remote</p>
<p><strong>About Firecrawl</strong></p>
<p>Firecrawl is the easiest way to extract data from the web. Developers use us to reliably convert URLs into LLM-ready markdown or structured data with a single API call. In just a year, we&#39;ve hit 8 figures in ARR and 90k+ GitHub stars by building the fastest way for developers to get LLM-ready data.</p>
<p>We&#39;re a small, fast-moving, technical team building essential infrastructure super-intelligence will use to gather data on the web. We ship fast and deep.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li><strong>Own growth projects end-to-end:</strong> Scope the work, write the frontend and backend, ship to production, and iterate based on what the data says. Move in days and weeks, not quarters.</li>
</ul>
<ul>
<li><strong>Build in-product growth features:</strong> Ship the onboarding, activation, retention, and expansion surfaces that turn signups into paying customers: usage dashboards, contextual upgrade prompts, feature discovery, guidance states.</li>
</ul>
<ul>
<li><strong>Improve the playground:</strong> Our highest-leverage conversion surface. Make it faster, smarter, and easier for a developer to go from &quot;trying it&quot; to &quot;using it.&quot;</li>
</ul>
<ul>
<li><strong>Ship conversion landing pages:</strong> Own the pages closest to conversion: partner integrations, competitive comparisons, use cases, campaign pages. Full stack: component, copy scaffolding, data layer, and the API endpoints behind anything interactive.</li>
</ul>
<ul>
<li><strong>Build internal tooling:</strong> Ship the UIs the Product Growth team uses to operate: customer dashboards, outreach interfaces, manifest views, triage tools. Turn repeatable manual work into software.</li>
</ul>
<ul>
<li><strong>Run experiments and measure:</strong> Instrument what you build. Run real A/B tests. Know whether the thing worked before shipping the next thing.</li>
</ul>
<ul>
<li><strong>Improve developer experience where it touches growth:</strong> SDK ergonomics, sample code, starter templates, first-run experiences. The surfaces where activation lives or dies.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<p><strong>Strong full-stack engineer.</strong> You ship production code across React, TypeScript, Next.js, and Node. You&#39;re comfortable in Python and SQL too. You&#39;ve owned real features end-to-end, not just frontends or just APIs.</p>
<p><strong>Product-minded with taste.</strong> You can look at a flow or a page and tell what&#39;s broken before the data does. You care about the details, and you can shape what you&#39;re building without needing a spec handed to you.</p>
<p><strong>Growth-oriented.</strong> You think in funnels, activation curves, and conversion rates. You want to know how your work moved the number, not just whether it shipped. You instrument everything.</p>
<p><strong>AI-native.</strong> You already use AI tools daily as core work infrastructure. You&#39;ve pushed Claude, Copilot, or similar tools far enough to know where they help and where they don&#39;t. You use them to ship more, faster.</p>
<p><strong>Fast and scrappy.</strong> You ship working versions, not perfect plans. You know when a one-off script is better than a framework and when a quick fix is better than an abstraction. You&#39;d rather ship four experiments this week than one polished feature next month.</p>
<p><strong>Clear communicator.</strong> You can explain what you built and why to non-technical teammates. You write good PR descriptions and document what&#39;s worth documenting.</p>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve shipped in-product growth features at a developer-tools or SaaS company: onboarding, activation, upgrade surfaces, in-product guidance.</li>
</ul>
<ul>
<li>You&#39;ve built conversion landing pages that measurably moved signup or activation, not just static pages that shipped.</li>
</ul>
<ul>
<li>You&#39;ve built internal tools that the team you worked with actually used every day.</li>
</ul>
<ul>
<li>You&#39;ve run real A/B tests and can talk about what you learned, not just what you shipped.</li>
</ul>
<ul>
<li>Experience with our stack: Next.js, React, Tailwind, TypeScript, Vercel, PostgreSQL, Anthropic Claude API.</li>
</ul>
<ul>
<li>You&#39;ve built with LLMs in production: prompt engineering, tool use, inference pipelines.</li>
</ul>
<ul>
<li>You&#39;ve worked on developer-facing products: SDKs, playgrounds, docs surfaces, APIs.</li>
</ul>
<ul>
<li>You know what &quot;scaling chaos&quot; feels like at a company doing 5M-50M ARR.</li>
</ul>
<p><strong>What it Means to Join Firecrawl</strong></p>
<ul>
<li><strong>Ship the Growth Number:</strong> The surfaces you build are how Firecrawl activates, expands, and retains customers. Every feature is a lever, and the impact is visible.</li>
</ul>
<ul>
<li><strong>High Leverage:</strong> One well-built flow can move activation by double-digit percentages. Your work is measurable and shipped to every user.</li>
</ul>
<ul>
<li><strong>Autonomy:</strong> Own your work. We care about outcomes, not hours. Ship what matters, skip what doesn&#39;t.</li>
</ul>
<ul>
<li><strong>Growth Path:</strong> Start as the senior IC. As the team grows, you&#39;ll have the option to lead engineers or go deeper technically.</li>
</ul>
<ul>
<li><strong>Remote-First Culture:</strong> Collaborate from anywhere, or work out of our SF HQ.</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<p><strong>Available to all employees</strong></p>
<ul>
<li>Generous PTO: 15 days mandatory, anything after 24 days, just ask (holidays excluded); take the time you need to recharge</li>
</ul>
<ul>
<li>Parental leave: 12 weeks fully paid, for moms and dads</li>
</ul>
<ul>
<li>Wellness stipend: 100/month for the gym, therapy, massages, or whatever keeps you human</li>
</ul>
<ul>
<li>Learning &amp; Development: Expense up to 150/year toward anything that helps you grow professionally</li>
</ul>
<ul>
<li>Team offsites: A change of scenery, minus the trust falls</li>
</ul>
<ul>
<li>Sabbatical: 3 paid months off after 4 years, do something fun and new</li>
</ul>
<p><strong>Available to US-based full-time employees</strong></p>
<ul>
<li>Full coverage, no red tape: Medical, dental, and vision (100% for employees, 50% for spouse/kids), no weird loopholes, just care that works</li>
</ul>
<ul>
<li>Life &amp; Disability insurance: Employer-paid short-term disability, long-term disability, and life insurance</li>
</ul>
<ul>
<li>Supplemental options: Optional accident, critical illness, hospital indemnity, and voluntary life insurance for extra peace of mind</li>
</ul>
<ul>
<li>Doctegrity telehealth: Talk to a doctor from your couch</li>
</ul>
<ul>
<li>401(k) plan: Retirement might be a ways off, but future-you will thank you</li>
</ul>
<ul>
<li>Pre-tax benefits: Access to FSAs and commuter benefits (US-only) to help your wallet out a bit</li>
</ul>
<ul>
<li>Pet insurance: Because fur babies are family too</li>
</ul>
<p><strong>Available to SF-based employees</strong></p>
<ul>
<li>SF HQ perks: Snacks, drinks, team lunches</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 to $200,000/year OTE&quot;,   &quot;salaryMin&quot;: 120000,   &quot;salaryMax&quot;: 200000,   &quot;salaryCurrency&quot;: &quot;USD&quot;,   &quot;salaryPeriod&quot;: &quot;year</Salaryrange>
      <Skills>React, TypeScript, Next.js, Node, Python, SQL, LLM, Claude, Copilot, SDKs, playgrounds, docs surfaces, APIs, Tailwind, Vercel, PostgreSQL, Anthropic Claude API</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Firecrawl</Employername>
      <Employerlogo>https://logos.yubhub.co/firecrawl.dev.png</Employerlogo>
      <Employerdescription>Firecrawl is a developer-tools company that provides a platform for extracting data from the web. They have hit 8 figures in ARR and 90k+ GitHub stars in just a year.</Employerdescription>
      <Employerwebsite>https://www.firecrawl.dev</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/firecrawl/fe6ab2c9-0528-4751-a6dd-67467e90fc0e</Applyto>
      <Location>Remote (Americas, UTC-3 to UTC-10)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c0c30c21-9ae</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>You&#39;ll own Gamma&#39;s data infrastructure and architecture as we scale to hundreds of millions of users and petabytes of data. This means defining the technical strategy for our end-to-end event pipeline architecture, designing distributed systems that handle massive scale with reliability, and establishing the foundation for how data flows through Gamma.</p>
<p>As a Staff Data Engineer, you&#39;ll balance hands-on engineering with technical leadership. You&#39;ll architect solutions for orders of magnitude growth, mentor engineers across the organization, and drive strategic decisions about our data stack. You&#39;ll work closely with analytics, product, and engineering leadership to enable data-driven decision making at scale while building systems that serve millions of users and inform critical business decisions.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own and evolve our end-to-end event pipeline architecture, from Kafka ingestion through Snowflake analytics, setting technical direction for data infrastructure</li>
<li>Design and architect distributed data systems that scale to orders of magnitude more data volume while maintaining world-class query performance</li>
<li>Lead initiatives to build and optimize CDC (change data capture) pipelines and streaming data transformations at massive scale</li>
<li>Establish best practices for data quality, pipeline reliability, and system observability across the organization</li>
<li>Drive strategic technical decisions about data modeling, infrastructure architecture, and technology choices</li>
<li>Mentor engineers and elevate data engineering practices across analytics, product, and engineering teams</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years as a data or software engineer with deep expertise in distributed systems, data infrastructure, and high-growth SaaS products at massive scale</li>
<li>Expert-level knowledge of Apache Kafka (producers, consumers, Kafka Connect, stream processing) and event streaming platforms</li>
<li>Extensive hands-on experience with Snowflake, including performance optimization, cost management, and data modeling; strong foundation in Postgres, CDC patterns, and replication strategies</li>
<li>Proven track record architecting and leading major data infrastructure initiatives through orders-of-magnitude growth</li>
<li>Experience establishing best practices and driving technical strategy across organizations</li>
<li>Strong communication skills with a history of influencing technical direction across engineering, analytics, and leadership</li>
<li>Proficiency with dbt, Terraform, and working knowledge of data governance, privacy compliance (GDPR, CCPA), and security best practices</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K - $310K</Salaryrange>
      <Skills>Apache Kafka, Snowflake, Postgres, dbt, Terraform, data governance, privacy compliance, security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a technology company that provides data infrastructure and architecture for hundreds of millions of users and petabytes of data.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/4b2c97d1-b12b-46b7-9e24-1fcd248e28a3</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4a46f923-4ec</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>You&#39;ll build and scale the application and data infrastructure that supports 70M+ users creating millions of gammas every day. This means working on real-time collaborative editing, databases, public APIs, and high-volume event pipelines while helping define and evolve the core data model and storage systems powering Gamma&#39;s business. You&#39;ll ship backend systems that directly impact growth metrics and user experience, balancing long-term technical investments with rapid shipping velocity.</p>
<p>As Software Engineer on the Platform team, you&#39;ll collaborate across frontend, product, and data teams in a fast-paced, product-led environment. You&#39;ll bring a product-minded approach, understanding how technical decisions impact user experience and business metrics while thriving in an environment where shipping quality directly impacts growth.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p>Design and implement scalable APIs, distributed systems, and data infrastructure that serve millions of users Help define and evolve the core data model and storage systems powering Gamma&#39;s business Ship backend systems that directly impact growth metrics and user experience Work on real-time collaborative editing, databases, public APIs, and high-volume event pipelines Balance long-term technical investments with rapid shipping velocity Collaborate across frontend, product, and data teams to deliver high-quality solutions under tight timelines</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180K - $275K plus benefits &amp; equity</Salaryrange>
      <Skills>backend technologies (Node.js, Python, or similar), databases (PostgreSQL, Redis), high-traffic production systems, performance optimization, real-time collaboration systems, event pipelines, AI-powered applications, product-minded approach, understanding of how technical decisions impact user experience and business metrics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma creates gammas for 70M+ users daily, supporting real-time collaborative editing, databases, public APIs, and high-volume event pipelines.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/7eba5a48-18d7-42d5-801f-7ba2522e6bc9</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3513ac8f-9c4</externalid>
      <Title>Staff Software Engineer, PostgreSQL</Title>
      <Description><![CDATA[<p>You&#39;ll own Gamma&#39;s PostgreSQL infrastructure as we scale from 70 million users to hundreds of millions, and from terabytes of data to hundreds of terabytes. Your job is to make sure our database can handle orders of magnitude more usage without compromising performance.</p>
<p>This is a deeply technical, hands-on role. You&#39;ll read and write code daily, dig into low-level systems, debug complex issues across massive datasets, and work on both core database scaling projects and application features. You&#39;ll collaborate closely with backend engineers, data engineers, and infrastructure teams to ensure our database architecture keeps pace with Gamma&#39;s growth.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and implement solutions for horizontally scaling PostgreSQL to hundreds of millions of users and hundreds of terabytes of data</li>
</ul>
<ul>
<li>Own database performance, availability, and reliability as usage grows by orders of magnitude</li>
</ul>
<ul>
<li>Debug complex issues across very large datasets and optimize query performance at scale</li>
</ul>
<ul>
<li>Establish best practices for database design, query optimization, and data modeling across engineering</li>
</ul>
<ul>
<li>Work across core infrastructure and application features that depend on database architecture</li>
</ul>
<ul>
<li>Collaborate with backend, data, and infrastructure engineers to align database strategy with product needs</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience with deep expertise in large-scale relational database systems, including hands-on experience managing hundreds of terabytes of data in production</li>
</ul>
<ul>
<li>Expert-level understanding of PostgreSQL (or comparable relational databases), horizontal scaling techniques such as sharding and partitioning, and complex query tuning</li>
</ul>
<ul>
<li>Strong programming skills in at least one backend language, with experience writing and maintaining highly available web APIs</li>
</ul>
<ul>
<li>Experience with large-scale event streaming systems, preferably Apache Kafka</li>
</ul>
<ul>
<li>Ability to explain complex technical concepts clearly to engineers across teams</li>
</ul>
<ul>
<li>Familiarity with TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, or AI/LLM tooling (Nice to have)</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K - $310K</Salaryrange>
      <Skills>PostgreSQL, horizontal scaling, sharding, partitioning, complex query tuning, backend language, web APIs, Apache Kafka, TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, AI/LLM tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma provides services to 70 million users and aims to scale to hundreds of millions.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/f672c729-457f-4143-80e9-363ddf8a0870</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>64780097-d2c</externalid>
      <Title>Software Engineer, Backend</Title>
      <Description><![CDATA[<p>You&#39;ll build and scale the backend systems that power millions of users creating content every day on Gamma. This role is about solving real distributed systems challenges at scale while maintaining the performance and reliability users expect from a modern AI-powered product. You&#39;ll work across the full stack, shipping features that directly impact how people create and share their ideas.</p>
<p>While this role is backend focused, you&#39;ll work across the entire product with our frontend, product, and design teams. Our full TypeScript stack is built on modern technologies including React, Node.js, PostgreSQL, Redis, and cutting-edge AI models.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Scale backend systems to hundreds of millions of users while maintaining high performance and availability</li>
<li>Build and optimize APIs that power real-time collaborative editing and AI content generation</li>
<li>Design and implement distributed systems that handle massive scale with reliability</li>
<li>Ship features across the full stack, working closely with frontend engineers to deliver polished experiences</li>
<li>Architect solutions for complex technical challenges in areas like data consistency, caching, and query optimization</li>
<li>Collaborate with product and design to turn ideas into production-ready features</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>3+ years building production backend systems with strong fundamentals in distributed systems, databases, and API design</li>
<li>Deep proficiency in TypeScript/Node.js or similar backend languages, with eagerness to work in our TypeScript stack</li>
<li>Experience scaling systems to handle millions of users and high throughput workloads</li>
<li>Strong understanding of PostgreSQL, Redis, or similar database technologies</li>
<li>Passion for building APIs, scaling complex systems, and creating excellent web applications</li>
<li>Curiosity and attitude that matches your technical knowledge</li>
<li>Prior experience working with websockets, streaming, or scaling inference workloads (Nice to have)</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $180K - $275K plus benefits &amp; equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $275K</Salaryrange>
      <Skills>TypeScript, Node.js, PostgreSQL, Redis, API design, Distributed systems, Database design, Websockets, Streaming, Inference workloads</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma is a modern AI-powered product with millions of users creating content every day.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/fb12356a-e868-4a4a-801c-882a6b0ac83f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5c31eece-8df</externalid>
      <Title>Senior Backend Engineer (AI), Pipeline Execution</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a key role in how we integrate AI into CI/CD workflows, working on features that improve performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>
<p>In this role, you&#39;ll go beyond using AI tools , you’ll design, build, and iterate on AI-assisted and agentic CI experiences. You’ll help define and implement patterns for how we measure success, how we instrument behaviour in production, and how we account for large language model limitations in real-world environments.</p>
<p>You&#39;ll also help integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with Engineering, Product, and UX partners to refine priorities: where we can move faster, where we’re missing data, and where there’s whitespace to innovate.</li>
</ul>
<ul>
<li>Contribute to defining what success looks like across our AI agents, ensuring we’re not just shipping, but learning from how features perform in production.</li>
</ul>
<ul>
<li>Keep a close eye on the competitive landscape and emerging AI-native DevOps tools, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>
</ul>
<p>Examples of Agentic CI work we have planned for the upcoming year:</p>
<ul>
<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>
</ul>
<ul>
<li>Automate the Fix a Failing Pipeline flow at scale – from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>
</ul>
<ul>
<li>Build the instrumentation and observability layer that makes agentic CI trustworthy , trigger volume dashboards, retry rates, cost safeguards , so we can measure what’s working, catch what isn’t, and iterate with confidence.</li>
</ul>
<ul>
<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>
</ul>
<p>What you’ll do:</p>
<ul>
<li>Design, build, and operate backend features that make GitLab CI fast, reliable, and easy to use at scale.</li>
</ul>
<ul>
<li>Implement AI-powered and agentic CI capabilities that integrate with GitLab’s Duo Agent Platform.</li>
</ul>
<ul>
<li>Instrument, monitor, and improve CI systems using data, observability, and safe rollout practices.</li>
</ul>
<ul>
<li>Write secure, well-tested Ruby on Rails code in our monolith, improving existing features while reducing technical debt.</li>
</ul>
<ul>
<li>Collaborate cross-functionally with Product, UX, and Infrastructure, mentoring others and raising engineering standards across the Verify stage.</li>
</ul>
<p>What you’ll bring:</p>
<ul>
<li>Strong Ruby on Rails backend experience in a large, production codebase.</li>
</ul>
<ul>
<li>In-depth experience building and operating AI-powered backend features in production.</li>
</ul>
<ul>
<li>A data- and observability-driven approach to diagnosing issues, improving reliability, and validating impact.</li>
</ul>
<ul>
<li>Clear written and verbal communication, with a collaborative, mentoring mindset in a remote, async environment.</li>
</ul>
<ul>
<li>Hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with AI agents or agentic frameworks (for example, LangChain or similar technologies) and building agentic workflows in production environments.</li>
</ul>
<ul>
<li>Strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Competitive salary and equity package</Salaryrange>
      <Skills>Ruby on Rails, AI-powered backend features, Data-driven approach, Observability, Safe rollout practices, PostgreSQL, CI/CD workflows, Agentic CI capabilities, LangChain, Agentic frameworks, Workflow orchestration, Infrastructure-heavy domains</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8514945002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US-Southeast</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c73333c8-f80</externalid>
      <Title>Software Engineer, Safeguards Foundations (Internal Tooling)</Title>
      <Description><![CDATA[<p>We are seeking a software engineer to join our Safeguards Foundations team, which builds the platforms, infrastructure, and internal tools that support the development of beneficial AI systems. As a software engineer on this team, you will design, build, and maintain internal review and enforcement tooling used by Safeguards analysts, including case queues, content review surfaces, decision/audit logging, and account-actioning workflows. You will work closely with Trust &amp; Safety operations, policy, and detection-engineering teams to turn messy operational workflows into well-designed, durable software.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain internal review and enforcement tooling used by Safeguards analysts</li>
<li>Understand user workflows and establish tooling for well processes that may be distributed across a number of tools and UIs</li>
<li>Develop the &#39;base layer&#39; of reusable APIs, data storage, and backend services that let new review workflows be stood up quickly and safely</li>
<li>Partner with operations and policy teams to understand reviewer pain points, then translate them into clear product improvements that reduce handling time and decision error</li>
<li>Integrate tooling with upstream detection systems and downstream enforcement infrastructure so that flagged behaviour flows cleanly from signal → human review → action</li>
<li>Build in the guardrails that sensitive internal tools require: granular permissions, audit trails, data-access controls, and reviewer wellbeing features (e.g. content blurring, exposure limits)</li>
<li>Instrument the tools you ship , surfacing metrics on queue health, reviewer throughput, and decision quality so the team can see what&#39;s working</li>
<li>Contribute to the Foundations team&#39;s shared platform and on-call responsibilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience as a software engineer, with meaningful time spent building internal tools, operations platforms, or back-office products</li>
<li>Comfortable using agentic coding tools (e.g. Claude Code) as a core part of your workflow, and can direct them to ship well-tested, production-quality software at a high cadence without lowering the bar</li>
<li>Take a product-minded approach to internal users: you work with the people using your tools, watch where they struggle, and fix it</li>
<li>Results-oriented, with a bias towards flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Communicate clearly with non-engineering stakeholders and can explain technical trade-offs to operations and policy partners</li>
<li>Care about the societal impacts of your work and want to apply your engineering skills directly to AI safety</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience building tooling in a trust &amp; safety, content moderation, fraud, integrity, or risk-operations setting</li>
<li>Experience designing case-management or workflow systems (queues, SLAs, escalation paths, audit logs)</li>
<li>Experience working with sensitive data and understanding the privacy, access-control, and reviewer-wellbeing considerations that come with it</li>
<li>Experience with GCP/AWS, Postgres/BigQuery, and CI/CD in a production environment</li>
<li>Experience using LLMs as a building block inside operational tools (e.g. assisted triage, summarisation, or classification in the review loop)</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Rebuilding the analyst review queue so cases are routed by severity and skill, with full decision history and one-click escalation</li>
<li>Shipping a unified account-investigation view that pulls signals from multiple detection systems into a single, permissioned surface</li>
<li>Adding content-obfuscation and exposure-tracking features to protect reviewers working with harmful material</li>
<li>Building an internal labelling tool that feeds high-quality ground truth back to the detection and research teams</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£255,000-£325,000 GBP</Salaryrange>
      <Skills>agentic coding tools, APIs, backend services, case queues, content review surfaces, decision/audit logging, account-actioning workflows, CI/CD, GCP/AWS, Postgres/BigQuery, LLMs, sensitive data, privacy, access-control, reviewer-wellbeing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191433008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5fade764-3db</externalid>
      <Title>Software Engineer, Java - Campaigns Growth</Title>
      <Description><![CDATA[<p>Job Description: The Client Engagement group helps Squarespace users grow their businesses by enabling them to create, target, and automate email campaigns. We power one-time email sends, automated messages triggered by customer actions, and audience segmentation, helping users deliver the right message to the right people at the right time.</p>
<p>We’re looking for engineers to help build and scale the systems behind campaign creation, audience management, segmentation, and automation. In this role, you’ll collaborate closely with product managers, designers, and other engineers to deliver reliable and intuitive features that help our users engage their audiences.</p>
<p>This is a hybrid role based in our Dublin office. You&#39;ll report to the Engineering Team Manager and work alongside a collaborative, high-performing team.</p>
<p>You&#39;ll Get To...</p>
<ul>
<li>Build: Low-latency, robust data pipelines to ingest, combine, and aggregate customer data from multiple internal and external sources, processing millions of data points every day.</li>
<li>Collaborate: Work with product managers to translate our goals into features that deliver value for our users.</li>
<li>Plan: Contribute to architecture discussions and design reviews, helping define how backend systems are built.</li>
<li>Own: Oversee features throughout the development lifecycle, from implementation to launch and maintenance.</li>
</ul>
<p>Who We&#39;re Looking For</p>
<ul>
<li>3+ years of experience building backend systems in a production environment.</li>
<li>Proficiency in Java, Kotlin, or another object-oriented programming language.</li>
<li>Strong understanding of system design, algorithms, and distributed systems concepts.</li>
<li>Experience with databases such as MongoDB, PostgreSQL, or similar technologies.</li>
<li>Strong communication skills and a collaborative mindset.</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li>Health insurance with 100% covered premiums for you, your spouse or partner and your dependent children including medical, dental, and vision</li>
<li>Life and Income Protection</li>
<li>Fertility and adoption benefits</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Pension benefits with employer match</li>
<li>Flexible paid time off</li>
<li>26 weeks paid maternity leave &amp; 12 weeks paid paternity leave</li>
<li>2 weeks paid family care leave</li>
<li>Education reimbursement</li>
<li>Employee donation match to community organizations</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
<li>Free lunch and snacks</li>
<li>Close proximity to cultural landmarks such as Dublin Castle and St. Patrick&#39;s Cathedral</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Object-Oriented Programming, System Design, Algorithms, Distributed Systems, MongoDB, PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It empowers millions of customers in more than 200 countries and territories with all the tools they need to create an online presence, build an audience, monetize, and scale their business.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7845953</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8b431912-3b8</externalid>
      <Title>Platform Staff Engineer- Universal Directory</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>We are seeking an experienced Staff Software Engineer to join Okta&#39;s Universal Directory Platform team within the Product Platform Pillar. The team serves as the intelligent core of the enterprise security fabric, maintaining the source of truth for all identity assets and their associated relationships.</p>
<p><strong>This role follows a hybrid model requiring two days per week in our San Francisco office.</strong></p>
<p>Opportunity</p>
<p>This position will be involved in the development, design, and maintenance of our highly performant, reliable, and scalable platform, which is critical for managing user lifecycles, groups, and memberships. The successful candidate will possess experience in building and deploying scalable, reliable software and infrastructure within a cloud environment.</p>
<p>What you’ll be doing</p>
<ul>
<li>Understand our identity management group codebase and development process: Jira, Technical Designs, Code Review, Testing, and Deployment.</li>
</ul>
<ul>
<li>Develop and implement frameworks and toolings for our Universal Directory Service platform.</li>
</ul>
<ul>
<li>Design and implement high-performance distributed scalable and fault-tolerant software components.</li>
</ul>
<ul>
<li>Quickly deliver high-quality bug fixes and handle customer-reported issues.</li>
</ul>
<ul>
<li>Conduct quality code reviews and automated testings.</li>
</ul>
<ul>
<li>Partner with our Product Development, QA, and Site Reliability Engineering teams for scoping the development and deployment work.</li>
</ul>
<p>What you’ll bring to the role</p>
<ul>
<li>The ideal candidate is someone who is experienced building software systems to manage and deploy reliable and performant infrastructure and product code at scale on a cloud infrastructure.</li>
</ul>
<ul>
<li>7+ years of Software Development in Java, preferably significant experiences with Hibernate and Spring Boot.</li>
</ul>
<ul>
<li>5+ years of development experience building services, internal tools and frameworks.</li>
</ul>
<ul>
<li>3+ years experience automating and deploying large scale production services in AWS, GCP or similar.</li>
</ul>
<ul>
<li>Hands-on understanding with the practical application of SQL databases.</li>
</ul>
<ul>
<li>Experience working with systems of scale ranging from monolithic applications to microservices.</li>
</ul>
<ul>
<li>Ability to work effectively with distributed teams and people of various backgrounds.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>B.S. Computer Science or equivalent</li>
</ul>
<ul>
<li>Experience with the Go programming language.</li>
</ul>
<ul>
<li>Experience of PostgreSQL, Docker and Kubernetes.</li>
</ul>
<ul>
<li>Experience working with Active Directory or Microsoft Azure.</li>
</ul>
<p>Below is the annual base salary range for candidates located in San Francisco Bay Area. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$243,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$194,000-$243,000 USD</Salaryrange>
      <Skills>Java, Hibernate, Spring Boot, AWS, GCP, SQL databases, Microservices, Go programming language, PostgreSQL, Docker, Kubernetes, Active Directory, Microsoft Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides a platform for organizations to securely manage identities and access to their resources.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7843765</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1f736004-9d0</externalid>
      <Title>Staff DevOps Engineer - PAM Core</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Okta is the identity standard. The Okta Identity Cloud is an independent and neutral platform that securely connects the right people to the right technologies at the right time. We help organisations do two things - secure and manage their extended enterprise, and transform their customers&#39; experiences.</p>
<p>With over 14,000 customers, 7,000+ app integrations, and well over 200 million registered users, we are only getting started.</p>
<p>The Okta Privileged Access Management (PAM) is an identity-centric approach to a common and critical privileged access use case. Our elegant Zero Trust architecture is purpose-built for the modern cloud and helps customers solve challenging security and operations pain points at scale.</p>
<p>We&#39;re looking for a staff-level Platform engineer to join a team of highly skilled and talented team players who&#39;re proud of what they own and deliver. Our elite team is fast, creative, and flexible; with a weekly release cycle and individual ownership, we expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company that is changing the cloud computing landscape forever.</p>
<p>You Will: Core contributor to Okta’s FedRAMP initiative Work with engineering teams to design, develop and deliver cloud-based infrastructure projects on a modern tech stack (Kubernetes/EKS, RDS, DynamoDB, Kinesis, MKS, Redis, OpenSearch, Docker, Terraform on AWS) Drive evaluation, development, and rollout of new common microservices Operate, support, and upgrade shared services and frameworks. Scale these as their usage invariably grows along with Okta&#39;s business. Evaluate existing systems to evolve them for serving in specialised circumstances to support Okta&#39;s future business needs Conduct design and code reviews. Ascertain that proposed designs consider scale, redundancy, and multi-tenancy. Ensure high programming standards by writing unit and functional tests. Monitor, troubleshoot, and fix services and frameworks the team owns Evaluate system performance and resolve bottlenecks Provide technical guidance and mentorship to junior developers Collaborate with architects, QA, product owners, security and operations engineers</p>
<p>You Have: Immense passion for doing the right thing to help Okta&#39;s technology stay ahead of its anticipated business growth Solid technology chops in architecting, implementing, tuning, and debugging some of the largest cloud deployments in the enterprise world A good understanding of computer science fundamentals such as data structures and algorithms Bachelor&#39;s degree in computer science or equivalent; master&#39;s preferred 7+ years of extensive programming experience in a modern programming language like Go, Java, or C++ especially in backend services. Go is preferred. 4+ years experience working with PostgreSQL or equivalent relational database systems. Experience with designing and querying databases with optimisation in mind is a plus. Experience with Cloud fundamental building blocks like IaC, Observability, Secrets Management, CI / CD pipelines, secure coding practices, and compliance. Demonstrated experience of working with REST and thorough understanding of its fundamentals Experience with AWS, Redis, Elasticsearch / OpenSearch, Kinesis, Kafka, and Docker Knowledge of network security, authentication, and authorisation Demonstrably followed best software engineering principles Familiarity with Agile software development process</p>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 requisition ID- P24954_3414076</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$194,000-$267,000 USD</Salaryrange>
      <Skills>Go, Java, C++, PostgreSQL, AWS, Redis, Elasticsearch/OpenSearch, Kinesis, Kafka, Docker, IaC, Observability, Secrets Management, CI/CD pipelines, secure coding practices, compliance, REST, network security, authentication, authorisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud computing company that provides identity management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7838282</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6922f25b-ddc</externalid>
      <Title>Staff Machine Learning Engineer, Developer Platform</Title>
      <Description><![CDATA[<p>We are looking for a Staff Machine Learning Engineer, Developer Platform to build the ranking and personalization systems that connect redditors with their next favorite game or app on Reddit. You will work closely with Product, Backend, Data Science, and Core Ranking/ML Platform teams to design and ship best-in-class ranking, retrieval, and experimentation systems that power discovery and re-engagement for Dev Platform experiences across feeds, surfaces, and notifications.</p>
<p>As a Staff Machine Learning Engineer, you will own problems end-to-end from framing objectives and defining signals, to training and deploying models, to designing experiments and reading results,not just tuning existing knobs on mature systems. You’ll help define the ranking strategy for Developer Platform, stand up new ML models and feedback loops where none exist today, and turn noisy behavioral data into clear wins for users, creators, and developers.</p>
<p>This is a high-impact, 0→1 role where your work will directly shape how interactive apps and games show up on one of the largest sites in the world. You’ll set technical direction for Dev Platform ranking, raise the bar on relevance and system reliability, and mentor other engineers as we scale a team of builders who value impact, personal growth, openness, and kindness.</p>
<p>Languages: Go, Python, C++ or any object-oriented programming language Libraries: Baseplate, GraphQL Databases: Redis, Postgres, Memcached Tools: Kubernetes, AWS</p>
<p>Requirements:</p>
<ul>
<li>8+ years of experience as a software engineer building large-scale distributed systems and/or data-intensive, ML-driven systems, using Go, Python, C++, or another object-oriented language.</li>
<li>Proven track record working on cross-functional product teams (PM, Design, DS, Eng) where you owned end-user outcomes, not just models or infra, and shipped features that moved core product metrics.</li>
<li>Experience designing and improving ML tooling and platforms: deployment and rollout, automation, experiment frameworks, system diagnosis, reproducibility, and ML monitoring/alerting.</li>
<li>Experience designing and implementing performant, stable, and efficient ML or ranking systems (recommendation, ads, search, feed, or similar high-throughput decision systems).</li>
<li>Strong organizational skills with the ability to prioritize, sequence, and de-risk work, keeping complex projects on schedule with a high attention to detail.</li>
<li>BS in Computer Science or a related technical field, or equivalent practical experience.</li>
<li>Comfortable with software engineering best practices: testing, code reviews, technical design docs, and clear documentation for other teams that depend on your systems.</li>
<li>Entrepreneurial mindset: you are self-directed, comfortable in ambiguity, and biased toward action in fast-paced environments. You like 0→1 building, iteration, and learning from experiments and failures.</li>
<li>Excellent communication skills: you collaborate effectively in a remote, cross-functional team, and can explain complex ML and ranking concepts to both technical and non-technical stakeholders.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>Go, Python, C++, Baseplate, GraphQL, Redis, Postgres, Memcached, Kubernetes, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website that allows users to submit, vote, and comment on content. It has over 121 million daily active unique visitors and is one of the internet&apos;s largest sources of information.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7848689</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0090f2b1-e91</externalid>
      <Title>Intermediate Backend Engineer, Database Automation (Ruby)</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer in the Database Automation team, you&#39;ll develop and enhance the frameworks, patterns, and tooling that keep GitLab&#39;s application datastores scalable, healthy, and safe across GitLab.com and thousands of self-managed instances.</p>
<p>You&#39;ll work closely with experienced engineers and cross-functional teams to build reliable backend features, learn best practices in data architecture and lifecycle management, and contribute to identifying and addressing performance improvements in our infrastructure.</p>
<p>Some examples of our projects:</p>
<ul>
<li>SQL Traffic Replay Tooling</li>
<li>Background Operations Framework</li>
</ul>
<p>In this role, you&#39;ll develop and iterate backend features and data frameworks that make it safe and efficient to work with data at scale across GitLab.com and self-managed deployments.</p>
<p>You&#39;ll work with product management, UX, frontend, infrastructure, software delivery, and analytics teams to design and ship high-performing, reliable solutions.</p>
<p>You&#39;ll review and improve database-related changes from other engineers and external contributors to ensure data integrity, safety, and performance across diverse environments.</p>
<p>You&#39;ll design, build, and maintain tooling and guardrails such as SQL traffic replay and background operations frameworks to proactively detect and remediate scalability, performance, and data health issues.</p>
<p>You&#39;ll research, design, and implement improvements to database performance, scalability, and data health, including areas like soft delete strategies and database migration testing.</p>
<p>You&#39;ll document database best practices, anti-patterns, and data architecture guidance so developers can make informed, consistent choices.</p>
<p>You&#39;ll develop solutions for database upgrade paths and migration strategies that maintain backwards compatibility while reducing downtime and operational friction for self-managed customers with diverse deployment configurations.</p>
<p>In this role, you&#39;ll succeed by shipping incremental improvements and, over time, building the capability to fully own larger pieces of work with shorter revision cycles.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PostgreSQL, Ruby on Rails, Database performance tuning, Troubleshooting, Software design, Algorithms, Performance trade-offs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is the intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481029002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dbee541c-ce7</externalid>
      <Title>Software Engineer III, Community Builders</Title>
      <Description><![CDATA[<p>We are seeking a talented Backend Engineer to join our team. As a key contributor, you will be responsible for designing, developing, and maintaining backend application services, ensuring the performance, security, and scalability of our systems. You will work collaboratively with product managers, designers, data scientists, and other engineers to deliver high-quality products. Your responsibilities will include contributing to the full development cycle, writing design documents and code, and receiving valuable feedback on your work. You will continuously learn and improve your technical and non-technical abilities.</p>
<p>Technologies We Use Our teams leverage a diverse and modern technology stack. While specific technologies may vary by team, we generally work with:</p>
<p>Languages: Go, Python Frameworks: Spark, Kafka, Airflow Datastores: BigQuery, Redis, Cassandra, PostgreSQL Tools: Kubernetes, Docker</p>
<p>What We Are Looking For A Bachelor&#39;s degree or higher in a quantitative or computer science-related field. 2+ years of software engineering experience in a scalable computing environment. A passion for learning and adapting to new technologies. Strong communication and collaboration skills, with the ability to work effectively with diverse stakeholders. Entrepreneurial spirit. You are self-directed, innovative, and biased towards action in fast-paced environments. You love to build new things and thrive in ambiguity and can easily navigate failure.</p>
<p>Benefits Comprehensive Healthcare Benefits and Income Replacement Programs 401k with Employer Match Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support Family Planning Support Gender-Affirming Care Mental Health &amp; Coaching Benefits Flexible Vacation &amp; Paid Volunteer Time Off Generous Paid Parental Leave</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,200-$229,900 USD</Salaryrange>
      <Skills>Go, Python, Spark, Kafka, Airflow, BigQuery, Redis, Cassandra, PostgreSQL, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7767702</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bf1554e6-c64</externalid>
      <Title>Software Engineer, AI Agents</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Why This Role Matters</p>
<p>At Cloudflare, we&#39;re building industrial-scale AI agents that support customers directly. This isn&#39;t research theater. Your code will power real customer interactions from day one, at global scale. Cloudflare already has the parts. You will assemble Workers, Durable Objects, KV, R2, D1, Vectorize, Workers AI, AI Gateway, and the Agent SDK into real agents customers use every day.</p>
<p>Role Intent</p>
<p>Ship production agents on the Cloudflare stack. Build, deploy, learn, repeat. Your code is the front door for Cloudflare customers.</p>
<p>What You Will Do</p>
<ul>
<li>Build agents on Workers with Durable Objects for state and short term memory</li>
<li>Wire tools with the Agent SDK, MCP, and function calling</li>
<li>Use Vectorize, KV, R2, and D1 for semantic memory, cache, files, and config</li>
<li>Run models through Workers AI and AI Gateway; integrate third parties when needed</li>
<li>Create evals, guardrails, and audits. Measure, tune, re-ship fast</li>
<li>Build agents that summarize, propose fixes, and escalate cleanly to humans</li>
<li>Expose agent health and metrics in transparent dashboards. No mystery boxes</li>
<li>Integrate with queues and webhooks; publish events on Queues or Pub/Sub</li>
<li>Cut cost per case and time to first response. Prove it with data.</li>
<li>Take end to end ownership including on call for what you ship (with team support)</li>
<li>Design and maintain robust observability for distributed AI workflows, implementing structured logging and end-to-end tracing across async service boundaries to ensure visibility into agent reasoning and execution.</li>
<li>Architect security boundaries for agent-led operations; implementing secure credential handling, multi-layer approval gates, and fine-grained trust scoping for mutative actions.</li>
</ul>
<p>Must Have</p>
<ul>
<li>Demonstrated success shipping production systems. Repos and releases that show real work.</li>
<li>Strong in TypeScript or Rust on Workers. HTTP, queues, async, performance</li>
<li>Fluency with Durable Objects, KV or R2, and either D1 or Postgres</li>
<li>Hands on with model tooling. Prompt I/O, tool calling, evals, safety checks</li>
<li>Observability mindset. Logs, traces, metrics, redlines</li>
<li>Experience with a2a/multi-agent frameworks</li>
<li>Experience developing LLM evaluation frameworks; automated scoring systems, CI-integrated quality gates.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Workers AI, AI Gateway, and Vectorize in production</li>
<li>Salesforce or Service Cloud experience. Webhooks and case APIs</li>
<li>Security depth. Prompt injection protection, secrets detection, PII handling</li>
<li>OSS agent frameworks. Know what to borrow and what to throw away.</li>
</ul>
<p>How We Build</p>
<p>Align fast on what matters.</p>
<p>Divide and conquer. Own your piece.</p>
<p>Ship. Watch customers use it.</p>
<p>Learn and repeat.</p>
<p>Why Join Cloudflare in India?</p>
<p>Impact at global scale: Your code will serve Cloudflare&#39;s customers across every region. Tens of millions of Internet properties depend on us.</p>
<p>Work on the edge: Few companies give engineers the chance to build AI directly into an edge platform that runs in 300+ cities worldwide.</p>
<p>Career growth: As one of the early engineers in our India based AI team, you&#39;ll have visibility, leadership opportunities, and a direct hand in shaping Cloudflare&#39;s AI roadmap.</p>
<p>Culture of ownership: We believe in autonomy, accountability, and trust. Engineers here own outcomes, not just tickets.</p>
<p>Learn and grow fast: Collaborate with peers across Support, Product, Security, and AI Platform teams. We encourage knowledge sharing, mentorship, and continuous learning.</p>
<p>Interview Signal</p>
<p>Expect to demonstrate your ability to:</p>
<ul>
<li>Build a mini agent on Workers using the Agent SDK</li>
<li>Store session memory in Durable Objects</li>
<li>Add semantic recall with Vectorize</li>
<li>Ship behind a KV flag with traces and observability</li>
<li>Push to production fast and take ownership</li>
</ul>
<p>Team Mission</p>
<p>The Agent Tech team owns the end to end stack for customer facing agents on Cloudflare. Everything runs at the edge.</p>
<p>Core Stack: Workers, Durable Objects, KV, R2, D1, Queues, Pub/Sub, Vectorize, Workers AI, AI Gateway, Pages, Zero Trust.</p>
<p>Principles: Ship fast. Measure truth. Simplify relentlessly. Own outcomes.</p>
<p>Fraud Alert</p>
<p>Do not fall victim to recruitment fraud. Cloudflare never charges application fees or requires candidates to purchase third-party certifications or training as a condition of employment. All official communication comes strictly from @cloudflare.com email addresses.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, genetic information, or any other characteristic protected by law.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype></Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, Rust, Workers, HTTP, queues, async, performance, Durable Objects, KV, R2, Postgres, model tooling, prompt I/O, tool calling, evals, safety checks, observability mindset, logs, traces, metrics, redlines, a2a/multi-agent frameworks, LLM evaluation frameworks, automated scoring systems, CI-integrated quality gates</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7831810</Applyto>
      <Location>In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>61234903-9fa</externalid>
      <Title>Engineering Manager (Java or Typescript) - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>
<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>
<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>
<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>
<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. Together, you&#39;ll ensure a seamless and cohesive user experience.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Frontend: Typescript and NodeJS processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>
</ul>
<ul>
<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>
</ul>
<ul>
<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>
</ul>
<ul>
<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>
</ul>
<ul>
<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>
</ul>
<ul>
<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>
</ul>
<ul>
<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>
</ul>
<ul>
<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>
</ul>
<ul>
<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKR’s) and roadmap.</li>
</ul>
<ul>
<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>
</ul>
<ul>
<li>Lead strategic team-driven projects, identify opportunities, define and uphold quality standards.</li>
</ul>
<ul>
<li>Foster a great team culture aligned with the company values, ownership, autonomy, and inclusivity within your team and the entire department.</li>
</ul>
<ul>
<li>Take full responsibility for delivering impactful features to millions of users annually.</li>
</ul>
<p>The role includes dedicating approximately 40-50% of the time as an individual contributor focused on feature implementation.</p>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, a related technical field or equivalent practical experience.</li>
</ul>
<ul>
<li>Experience building and implementing backend services and/or frontend applications.</li>
</ul>
<ul>
<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>
</ul>
<ul>
<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>
</ul>
<ul>
<li>Love for building world-class products with a great user experience.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters,and you’ll see the impact.</li>
</ul>
<ul>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets,with a strong focus on AI.</li>
</ul>
<ul>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts,people we can all relate to,making work meaningful and energizing.</li>
</ul>
<ul>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
</ul>
<ul>
<li>Flexibility:  Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
</ul>
<ul>
<li>Competitive Package: 95.000-125.000€ + VSOPs based on relevant experience and seniority , learn more about our approach to compensation here.</li>
</ul>
<ul>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized,but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>Typescript, NodeJS, ReactJS, Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search and booking services for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/1558189</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Unknown</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a90d311-fba</externalid>
      <Title>Full Stack Engineer - Equities Autocallables</Title>
      <Description><![CDATA[<p>This role is part of a global team responsible for enhancing and supporting a real-time trade capture platform that processes, normalizes, and enriches the firm&#39;s executions across multiple asset classes. The platform feeds executions into downstream systems including real-time P&amp;L, risk, and reporting.</p>
<p>The successful candidate will focus on a Private Credit buildout, with particular emphasis on equities and options, and on integrating with third-party platforms such as Murex and ION. They will design, develop, and maintain Java-based services that support a real-time trade capture platform for our autocallable buildout, and build and support Kafka-based streaming pipelines to process, normalize, and distribute trading and reference data to downstream systems.</p>
<p>Key responsibilities include collaborating closely with portfolio managers, traders, operations, and risk teams to understand requirements and translate them into robust technical solutions, contributing to the architecture and design of low-latency, high-availability components, including multithreaded and distributed systems, and monitoring, troubleshooting, and resolving production issues related to trading workflows, data integrity, and system performance.</p>
<p>We are looking for a highly skilled and experienced software engineer with a strong background in Java, Kafka, and front-end technologies using Typescript/Javascript, in this role you&#39;ll be using Angular. You should have a solid understanding of object-oriented design, design patterns, and multithreading in distributed systems, and hands-on experience with unit testing and integration testing frameworks and best practices.</p>
<p>In addition, you should be familiar with CI/CD pipeline (Jenkins) and DevOps tools/practices (e.g., Git, build tools, automated testing, deployment automation), experience with SQL databases such as Postgres and SQLServer, and comfort with modern IDEs and developer productivity tools; openness to using AI-assisted development tools and modern developer workflows.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Java, Kafka, Angular, Typescript, Postgres, SQLServer, Jenkins, Git, CI/CD pipeline, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global financial technology company that provides real-time trade capture platforms for various asset classes.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954367614</Applyto>
      <Location>Miami, Florida, United States of America · New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07c95966-8e7</externalid>
      <Title>Backend Developer - Host Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Host Experience department as a Backend Developer and become part of the team that brings new vacation rental properties to life on Holidu.</p>
<p>You&#39;ll be working at the heart of our property acquisition engine , where we take hosts from their very first sign-up all the way to their first booking, making that journey as fast and seamless as possible.</p>
<p>This team sits at a uniquely strategic intersection of product and growth. You will build and optimize the systems that every new host flows through: from onboarding and listing creation, to property configuration, content quality, and referral programs.</p>
<p>The work demands reliability and attention to detail , because the time between a host signing up and welcoming their first guest, and how well their property performs from day one, is directly shaped by the quality of what you build.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>
<li>Deployed as microservices on AWS-hosted Kubernetes cluster (EKS).</li>
<li>Internal and external web applications written with ReactJS.</li>
<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>
<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>
<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>
<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and contribute to shaping the team&#39;s direction as you grow.</li>
<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making , and continuously sharpen how you use these tools.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A passion for great user experience and drive to deliver world-class products.</li>
<li>Early experience delivering product impact through engineering , you&#39;ve shipped things that real users depend on.</li>
<li>Experience with Java or Kotlin with Spring is a plus.</li>
<li>Experience with relational databases and deploying apps in cloud environments. NoSQL experience is a plus.</li>
<li>Familiarity with various API types and integration best practices.</li>
<li>Strong problem-solving skills and a team-oriented mindset.</li>
<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>
<li>A love for coding and building high-quality products that make a difference.</li>
<li>High motivation to learn and experiment with new technologies.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, connecting hosts with millions of guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2589679</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3b63dd5-0f6</externalid>
      <Title>Backend utvecklare</Title>
      <Description><![CDATA[<p>We are seeking an experienced backend developer to join our tech team. As a backend developer, you will be responsible for designing, developing, and maintaining the server-side of our applications and systems. You will work closely with our frontend developers, designers, and product owners to ensure a seamless integration between frontend and backend.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop scalable and efficient backend solutions for our digital platforms.</li>
<li>Write clean, readable, and reusable code.</li>
<li>Perform unit testing and debugging to ensure high quality and reliability.</li>
<li>Participate in technical discussions and contribute ideas to improve the product&#39;s performance and functionality.</li>
<li>Collaborate with frontend developers and other team members to ensure a smooth user experience.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience in backend development with a focus on web applications.</li>
<li>Good knowledge of programming languages such as Python, Java, or similar.</li>
<li>Experience working with frameworks such as Django, Flask, Spring, or similar.</li>
<li>Familiarity with database management systems such as MySQL, PostgreSQL, or similar.</li>
<li>Knowledge of API design and implementation.</li>
<li>Strong problem-solving skills and ability to work independently as well as in a team.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Attractive salary based on experience and competence.</li>
<li>Opportunity to work with exciting projects and the latest technology.</li>
<li>Flexible working hours and possibility of remote work.</li>
<li>Continuous professional development and opportunities for career growth.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend development, web applications, Python, Java, Django, Flask, Spring, MySQL, PostgreSQL, API design, problem-solving, cloud services, AWS, Google Cloud, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Transportation</Industry>
      <Employername>Scandinavian Airlines</Employername>
      <Employerlogo>https://logos.yubhub.co/scandinavianairlines.teamtailor.com.png</Employerlogo>
      <Employerdescription>Scandinavian Airlines is an airline company that operates flights across the world.</Employerdescription>
      <Employerwebsite>https://scandinavianairlines.teamtailor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://scandinavianairlines.teamtailor.com/jobs/4882026-backend-utvecklare</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
<li>Streamlining development and deployment processes.</li>
<li>Implementation of MCP servers focused on assisting rest of the Support Engineering team as well as proactively monitoring production environment.</li>
</ul>
<p>Technical Qualification:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/non-supervised learning and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides investment management services to clients. It is a leading investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bcb4d82-b90</externalid>
      <Title>Working Student Backend Engineering (all genders)</Title>
      <Description><![CDATA[<p>You will be working as a Working Student in the Account Compliance &amp; Experience (ACE) team, which is responsible for delivering secure and seamless flows for account lifecycle, relationship, and compliance to customers.</p>
<p>As a Working Student, you will contribute to the development of new backend features across the ACE domain, assist with operational tasks, get hands-on with modern AI-assisted development, and support ongoing tech refactoring efforts.</p>
<p>You will work directly alongside senior engineers, take part in real product development, and gradually build ownership over meaningful parts of our codebase.</p>
<p>The ACE team works within Holidu&#39;s broader backend ecosystem, using Java/Kotlin with Spring Boot, PostgreSQL, Redis, and other data stores, as well as AWS services and Jenkins for CI/CD.</p>
<p>You will have the opportunity to attend team planning sessions, architecture discussions, and retrospectives, giving you a real window into how a senior engineering team operates in a high-growth company.</p>
<p>We offer a fair salary, impact, growth, community, flexibility, and fitness opportunities.</p>
<p>You will be required to work ~20 hours per week, with 1-2 days per week in the office in Munich.</p>
<p>You should be currently enrolled in a degree in Computer Science, Software Engineering, or a related field, have a solid understanding of object-oriented programming and basic software design principles, and some hands-on experience with Java or Kotlin.</p>
<p>You should also have familiarity with RESTful APIs and relational databases (SQL), a genuine curiosity for backend systems, and a product-minded attitude.</p>
<p>Excellent communication skills in English are required, and German is a plus but not required.</p>
<p>Bonus points if you have exposure to Spring Boot, cloud platforms (AWS), or any experience with identity/access management concepts.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>working_student</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, PostgreSQL, Redis, AWS services, Jenkins, CI/CD, RESTful APIs, relational databases (SQL), cloud platforms (AWS), identity/access management concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a host platform for property owners and managers.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2605407</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack, AI applications, to solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions,we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation, full-stack engineering experience with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Driven async communication methodologies to reduce communication frictions</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Past experience working in a forward deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd7327f8-fcf</externalid>
      <Title>Staff Software Engineer, Full-Stack - Enterprise Gen AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a frontend-focused full-stack engineer to help build AI-powered applications that redefine enterprise workflows and push the boundaries of interactive AI. As a staff software engineer, you&#39;ll work on a mix of cutting-edge customer-facing AI applications and internal SaaS products. Our engineering team powers projects like TIME&#39;s Person of the Year AI experience, where our AI technology helped shape one of the most iconic features in media. You&#39;ll also contribute to Scale&#39;s GenAI Platform (SGP), a powerful system that enables businesses to build and deploy AI agents at scale.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Building and enhancing user-facing AI applications for major enterprise customers, including high-profile media and Fortune 500 companies</li>
<li>Developing and refining features for Scale&#39;s GenAI Platform, empowering businesses to build, deploy, and manage AI-driven agents</li>
<li>Designing, building, and optimizing polished, high-performance UIs using Next.js, React, TypeScript, and Tailwind</li>
<li>Working closely with product managers, designers, and AI/ML teams to create seamless, intuitive, and impactful user experiences</li>
<li>Integrating frontend applications with backend services, working with APIs, authentication systems, and cloud-based infrastructure</li>
</ul>
<p>In this role, you&#39;ll have the opportunity to shape the future of AI-powered user experiences, working on projects that impact millions of users while developing tools that empower businesses to deploy AI at scale.</p>
<p>The base salary range for this full-time position in our hub locations of San Francisco, New York, or Seattle is $248,400,$310,500 USD. Compensation packages at Scale include base salary, equity, and benefits. You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,400—$310,500 USD</Salaryrange>
      <Skills>Next.js, React, TypeScript, Tailwind, AI/ML, APIs, Authentication systems, Cloud-based infrastructure, FastAPI, PostgreSQL, GraphQL, AWS, Azure, GCP, Data-rich web platforms, Interactive AI applications, Agent-based systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4529529005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack, AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starting approach with technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also need to drive async communication methodologies to reduce communication frictions.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer, (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starting individual with technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficient in Arabic, past experience working in a forward-deployed engineer/dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14499a71-fa9</externalid>
      <Title>Software Engineer, Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises,designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements , we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
</ul>
<ul>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
</ul>
<ul>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
</ul>
<ul>
<li>Optimize backend performance for latency, throughput, and cost,ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
</ul>
<ul>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
</ul>
<ul>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
</ul>
<ul>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
</ul>
<ul>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
</ul>
<ul>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
</ul>
<ul>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
</ul>
<ul>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems,understanding how to deploy, evaluate, and scale AI workloads in production.</li>
</ul>
<ul>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
</ul>
<ul>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
</ul>
<p>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model Integration, AI Agent Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4536653005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
</ul>
<ul>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
</ul>
<ul>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
</ul>
<ul>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
</ul>
<ul>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
</ul>
<ul>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
</ul>
<ul>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end,from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features</li>
</ul>
<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>
<ul>
<li>Enable Human-in-the-Loop AI Training</li>
</ul>
<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>
<ul>
<li>Support RLHF and Preference Data Workflows</li>
</ul>
<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>
<ul>
<li>Leverage LLMs in the Review Loop</li>
</ul>
<p>Build systems that use LLMs to assist human reviewers,such as automated checks, critiques, ranking suggestions, or quality signals,while maintaining human oversight.</p>
<ul>
<li>Advance AI Evaluation</li>
</ul>
<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>
<ul>
<li>Create Intuitive, Reviewer-Focused Interfaces</li>
</ul>
<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>
<ul>
<li>Architect Scalable Data &amp; Service Layers</li>
</ul>
<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>
<ul>
<li>Solve Ambiguous, Real-World Problems</li>
</ul>
<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>
<ul>
<li>Ensure System Reliability</li>
</ul>
<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>
<ul>
<li>Elevate the Team</li>
</ul>
<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
</ul>
<ul>
<li>2+ years of experience in a software or machine learning engineering role.</li>
</ul>
<ul>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
</ul>
<ul>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
</ul>
<ul>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
</ul>
<ul>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills.</li>
</ul>
<ul>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
</ul>
<ul>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
</ul>
<ul>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
</ul>
<ul>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
</ul>
<ul>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
</ul>
<ul>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
</ul>
<ul>
<li>APIs: GraphQL</li>
</ul>
<ul>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
</ul>
<ul>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
</ul>
<ul>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
</ul>
<ul>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
</ul>
<ul>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e355a4a3-c92</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) ; postgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modelling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p><strong>Automation &amp; Tooling</strong></p>
<ul>
<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p><strong>Operations &amp; Incident Response</strong></p>
<ul>
<li>Lead response during database incidents,performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4 plus years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/ vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p><strong>Preferred/Bonus Qualifications</strong></p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437947</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8482d0fc-285</externalid>
      <Title>Senior Backend Engineer, Gitlab Delivery: Upgrades</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab reliably by building and maintaining the infrastructure, tooling, and automation behind our deployment options.</p>
<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (Get), and the GitLab Operator to make GitLab easier to deploy, more secure by default, and scalable across major cloud providers and a wide range of customer environments.</p>
<p>In this role, you&#39;ll partner closely with engineering teams and act as a bridge to customer needs, improving installation, upgrade, and day-to-day operations for production-grade GitLab deployments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Evolving Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support validated reference architectures for enterprise-scale deployments</li>
</ul>
<ul>
<li>Building automation pipelines and observability into deployment tooling to validate, test, and operate GitLab across Kubernetes and other self-managed environments</li>
</ul>
<p>You&#39;ll maintain and evolve the Omnibus GitLab package to support reliable, production-ready self-managed deployments, improving deployment stability, increasing upgrade success rates, and reducing escalation rates.</p>
<p>You&#39;ll develop and improve GitLab Helm Charts so core components integrate cleanly and scale across supported environments, reducing deployment friction, shortening time to deploy, and improving operational consistency at scale.</p>
<p>You&#39;ll enhance the GitLab Environment Toolkit (Get), validated reference architectures, and the GitLab Operator for secure, Kubernetes-native lifecycle management, improving reliability, strengthening security baselines, and accelerating adoption in customer environments.</p>
<p>You&#39;ll improve installation, upgrade, and operational workflows across deployment methods to create a consistent experience for self-managed customers, reducing operational overhead, lowering failure rates, and increasing consistency across deployment methods.</p>
<p>You&#39;ll partner with Security to address vulnerabilities and deliver secure defaults and configurations in the deployment stack, reducing exposure to vulnerabilities and improving baseline security across self-managed deployments.</p>
<p>You&#39;ll build and maintain automation and continuous integration and continuous delivery pipelines that validate and test Omnibus, Charts, Get, and the Operator, increasing release confidence, improving test coverage, and reducing regressions across deployment tooling.</p>
<p>You&#39;ll work closely with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features into deployment methods and keep them reliable, scalable, and aligned with customer needs, improving delivery readiness and reducing operational issues after release.</p>
<p>You&#39;ll guide architectural direction, mentor backend engineers, and contribute to the roadmap for self-managed delivery, improving technical quality, accelerating delivery effectiveness, and strengthening team execution.</p>
<p>You&#39;ll have experience operating backend services in production, including deployment, monitoring, and maintenance in Kubernetes- and Helm-based environments.</p>
<p>You&#39;ll have proficiency in Go for building observable and resilient services, with working knowledge of Ruby as a useful addition.</p>
<p>You&#39;ll have hands-on practice with infrastructure as code, including tools such as Terraform, and with managing infrastructure across cloud providers such as Google Cloud Platform, Amazon Web Services, or Microsoft Azure.</p>
<p>You&#39;ll have knowledge of database design, operations, and troubleshooting, especially for PostgreSQL in secure and scalable setups.</p>
<p>You&#39;ll have knowledge of secure, scalable, and reliable deployment practices, including service scaling and rollout strategies.</p>
<p>You&#39;ll have familiarity with observability tools and patterns such as Prometheus and Grafana to monitor system health and performance.</p>
<p>You&#39;ll have ability to work effectively in large codebases and coordinate across distributed, cross-functional teams using clear written communication.</p>
<p>You&#39;ll have openness to transferable experience from related backend or infrastructure roles, along with the ability to write user-focused documentation and implementation guides.</p>
<p>The Upgrades team is part of GitLab Delivery and focuses on helping self-managed customers run GitLab successfully in their own environments, from smaller deployments to large enterprise footprints.</p>
<p>We own deployment and operational tooling across our work on Omnibus GitLab, Helm Charts, Get, and the GitLab Operator, and we work as a globally distributed, all-remote group that works asynchronously with Site Reliability Engineering, Release, Security, and Development teams across regions.</p>
<p>We are focused on making self-managed GitLab easier to deploy, upgrade, secure, and operate at scale.</p>
<p>For more on how we work, see Team Handbook Page.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Ruby, Terraform, Google Cloud Platform, Amazon Web Services, Microsoft Azure, PostgreSQL, Prometheus, Grafana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8463933002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>646a6426-386</externalid>
      <Title>Member of Technical Staff - X Money</Title>
      <Description><![CDATA[<p>We are seeking a talented Software Engineer to join our X Money team, focused on building a revolutionary global payment network that will serve over 600 million users and rival the world&#39;s largest financial institutions.</p>
<p>In this role, you will specialise in backend development, designing and optimising robust microservices to ensure scalability, security, and reliability. You will support full-stack efforts, collaborate with cross-functional teams on payments, fraud detection, and compliance initiatives, and contribute to the creation of a high-scale financial products platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and optimise microservices for high-concurrency transactions using Go, Postgres, and Kafka.</li>
<li>Collaborate on system architecture, testing, and monitoring to ensure uptime and performance.</li>
<li>Build internal tools using frontend technologies as needed to support operational efficiency.</li>
<li>Support the Technical Lead in risk mitigation and align with engineering, product, and compliance teams to drive project success.</li>
<li>Contribute to the development of secure, scalable systems for handling financial data and transactions.</li>
<li>Iterate rapidly on feedback to deliver high-quality solutions in a dynamic environment.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>5+ years of software engineering experience, with a strong focus on backend development.</li>
<li>Proficiency in Go or similar languages and experience with databases (e.g., Postgres) and streaming systems (e.g., Kafka).</li>
<li>Familiarity with building distributed systems for high-scale, low-latency environments.</li>
<li>Knowledge of handling secure financial data.</li>
<li>Ability to contribute to frontend development for internal tools when required.</li>
<li>Strong communication and problem-solving skills, with a collaborative mindset.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in fintech, payments, or regulatory frameworks (e.g., PCI-DSS, AML/KYC).</li>
<li>Prior work in a fast-paced, startup-like environment on greenfield projects.</li>
<li>Comfort navigating ambiguous requirements and iterating based on feedback.</li>
<li>Passion for leveraging AI to transform financial systems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Postgres, Kafka, backend development, microservices, scalability, security, reliability, distributed systems, financial data, frontend development, fintech, payments, regulatory frameworks, PCI-DSS, AML/KYC, fast-paced environment, greenfield projects, AI transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5007310007</Applyto>
      <Location>Tokyo, JP</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>abf4ca4a-26d</externalid>
      <Title>Senior Software Engineer - Safety Experience</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Safety Experience team. As a key member of this team, you will design, build, and maintain product features and systems that prevent harmful activities while ensuring regulatory compliance. Your work will play a critical role in keeping our users safe, which is essential for our growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the development of highly-visible, user-facing products that protect our users.</li>
<li>Design, build, and deploy robust production APIs, backend services, and data pipelines to launch safety features at scale.</li>
<li>Collaborate cross-functionally with Product, Design, Policy, Data Science, ML, Legal, and T&amp;S Operations to create solutions that are both impactful and lovable.</li>
<li>Iterate on in-house tooling to supercharge our T&amp;S workflows.</li>
<li>Respond rapidly to the ever-evolving abuse and compliance landscape.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years experience writing Python and utilizing back-end API frameworks (Flask, Django).</li>
<li>5+ years experience developing front-end interfaces with JavaScript (React, TypeScript) for both web and mobile platforms.</li>
<li>Familiarity with databases such as Cassandra, Postgres, and ScyllaDB.</li>
<li>Demonstrated success leading end-to-end delivery of complex projects: breaking down ambiguity, coordinating rollouts, and aligning stakeholders.</li>
<li>Demonstrated ability to troubleshoot, debug, and test complex systems in a live, production environment.</li>
<li>Exceptional communication and collaboration skills, with the ability to work well with cross-functional partners, designers, and other engineers.</li>
<li>Experience using metrics and dashboards to make data-driven decisions and develop insightful reports.</li>
<li>Experience utilizing AI tools like Claude Code and Cursor to supercharge dev workflows</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience in the Safety or Anti-Abuse domain.</li>
<li>Experience analyzing and visualizing data using Datadog or Mode.</li>
<li>Familiarity with real-time streaming systems like Kafka or Pub-Sub.</li>
<li>Ability to contribute to offline analytics jobs and processes.</li>
<li>Experience building and operating mobile-client features on iOS and Android.</li>
<li>Exposure to lower-level languages such as Go, Rust, and Elixir.</li>
<li>A strong moral compass that drives you to protect users and do the right thing.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Python, Flask, Django, JavaScript, React, TypeScript, Cassandra, Postgres, ScyllaDB, Claude Code, Cursor, Datadog, Mode, Kafka, Pub-Sub, Go, Rust, Elixir</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform used by over 200 million people each month for various purposes, primarily gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8377133002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0540dd96-198</externalid>
      <Title>Senior Software Engineer - Query Engine, Database Internals - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join the Elasticsearch - Analytical Engine team. This globally-distributed, completely remote team of senior engineers is responsible for building new analytics capabilities in Elasticsearch&#39;s latest aggregation framework based on a completely new compute engine, and accessed via our new piped query language called ES|QL.</p>
<p>This is a senior software engineering role that covers the design and implementation of new features, enhancements to existing features, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable, and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>You&#39;ll be a full-time Elasticsearch contributor, building data-intensive new features and fixing intriguing bugs, all while making the code easier to understand. You are able to research what available data structures and algorithms work best to implement a new functionality or enhancement. Sometimes you&#39;ll need to implement a data structure or algorithm in the code base. And there will be times when you&#39;ll need to get close to the operating system and hardware.</li>
<li>You&#39;ll work with a globally distributed team of experienced engineers focused on the search and query (ES|QL) analytics capabilities of Elasticsearch. You&#39;ll get to work with the teams that build the UI to ensure a good user experience, and you&#39;ll get to work with the teams building solutions on top of these APIs</li>
<li>You&#39;ll be an expert in several areas of Elasticsearch, and everyone will turn to you when they have a question about them. You&#39;ll improve those areas based on your questions and your instincts.</li>
<li>You&#39;ll work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts, and sometimes handling them yourself.</li>
<li>You&#39;ll write idiomatic modern Java -- Elasticsearch is 99.8% Java!</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as newer features like lambdas.</li>
<li>You have experience with software systems engineering</li>
<li>You have a strong desire to optimize and make use of the most efficient data structures and algorithms.</li>
<li>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</li>
<li>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code, approaches, and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</li>
<li>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</li>
<li>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve built things with Elasticsearch before.</li>
<li>You’ve worked in the search and information retrieval space. You’re familiar with the data structures and algorithms associated with information retrieval.</li>
<li>You’ve worked on data storage technology or have experience building data analytics capabilities.</li>
<li>You have experience designing, leading and owning cross-functional initiatives.</li>
<li>You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component.</p>
<p>The typical starting salary range for new hires in this role is listed below. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>
<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>
<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being. The typical starting salary range for this role is:$133,100-$210,600 USDThe typical starting salary range for this role in the select locations listed above is:$159,900-$252,900 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$133,100-$210,600 USD</Salaryrange>
      <Skills>core Java, standard library of data structures and concurrency constructs, newer features like lambdas, software systems engineering, data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7723819</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>07626e74-020</externalid>
      <Title>Engineering Architect, Identity (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Auth0 secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Software Architect, Identity</strong></p>
<p><strong>The Engineering Architect Team</strong></p>
<p>The Architecture team is a small group of very senior engineers reporting to our VP of Engineering Excellence, working broadly across the organisation in collaboration with Engineering, Product, and Security. We partner deeply with other Engineering teams for large projects, and provide direction and architectural guidance for smaller initiatives. We have a dual-pronged charter to “level up the tech stack and level up the people stack” via both technical contributions and partnerships/mentoring.</p>
<p>In this role, you will have the opportunity to significantly contribute to Auth0’s future technology direction. Through your experience, knowledge of industry trends, and technical abilities you will provide guidance, build proof of concepts, and deliver production software implementations that help Auth0 Engineering teams move faster by using and developing standard patterns and technologies. You will also help advance the engineering culture and help uplevel other engineers. Note that while this role involves a lot of guidance, documentation, and leadership, it also requires substantial hands-on coding and development of both applications and systems.</p>
<p><strong>What you’ll be doing</strong></p>
<ul>
<li>Collaborate with Product, Security, and Engineering teams to define and continually improve Auth0’s technology stack and architecture.</li>
</ul>
<ul>
<li>Foster and lead innovation in the IAM space, with a strong focus on Agentic Identity</li>
</ul>
<ul>
<li>Lead initiatives to enhance, scale, and evolve Auth0’s product offerings.</li>
</ul>
<ul>
<li>Embed within Engineering teams across the organisation for large projects, while providing guidance and lighter touch engagements for smaller initiatives.</li>
</ul>
<ul>
<li>Design, architect, and document large scale distributed systems.</li>
</ul>
<ul>
<li>Lead the development of complex, broadly-scoped functionality in a very large and deep set of services and components.</li>
</ul>
<ul>
<li>Teach by doing: coding, optimising, and troubleshooting Node.js and Go applications in collaboration with feature development teams.</li>
</ul>
<ul>
<li>Implement features and create consistent foundations using technologies such as AWS, Azure, Node.js, Go, MongoDB, Redis, PostgreSQL, Kubernetes.</li>
</ul>
<ul>
<li>Investigate, understand, and resolve bottlenecks in our ability to scale, use resources efficiently, and maintain a 99.99% uptime SLA.</li>
</ul>
<ul>
<li>Drive technical decision making while striving to find the right balance between factors such as simplicity, flexibility, reliability, cost, and performance.</li>
</ul>
<ul>
<li>Participate in “round table” discussions and mentor team members and engineers throughout the organisation to level up our people.</li>
</ul>
<ul>
<li>Participate in our Engineering Leadership Team with other architects, directors, and executives.</li>
</ul>
<ul>
<li>You will join our Incident Commander on-call rotation. Members of our team do periodic on-call rotation for high-severity incidents to help up-level our responses After spending time getting acquainted with our applications, systems, and processes, and getting training to</li>
</ul>
<p><strong>What you’ll bring to the role</strong></p>
<ul>
<li>10+ years of software development experience.</li>
</ul>
<ul>
<li>5+ years of experience working on cloud applications.</li>
</ul>
<ul>
<li>Experience with API-first applications using REST and/or gRPC</li>
</ul>
<ul>
<li>Passion and thorough understanding of what it takes to build and operate secure, reliable systems at scale.</li>
</ul>
<ul>
<li>Knowledge of Identity Protocols such as OAuth, OIDC and SAML.</li>
</ul>
<ul>
<li>Industry knowledge of the Authorization and Authentication spaces.</li>
</ul>
<ul>
<li>Experience in building AI Agents, and/or MCP servers applications.</li>
</ul>
<ul>
<li>Experience with security engineering and application security.</li>
</ul>
<ul>
<li>Very strong written and verbal communication skills with a demonstrated ability to adjust your communication style to the intended audience, whether communicating with senior executives, customers, engineers, or product managers.</li>
</ul>
<ul>
<li>Mastery and deep understanding of hands-on software development building distributed systems.</li>
</ul>
<ul>
<li>Experience with multi-cloud environments and container deployments, particularly Kubernetes in AWS/Azure.</li>
</ul>
<ul>
<li>Prior experience with application performance management, tracing, and performance testing tools.</li>
</ul>
<ul>
<li>Excellence at creating clarity and alignment for technical initiatives.</li>
</ul>
<ul>
<li>Great ability to build trust through collaboration with multiple teams and get consensus on a vision.</li>
</ul>
<ul>
<li>Knowledge of application security and cloud security best practices.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>Deep experience in Node.js (Javascript or Typescript), or Golang.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$274,000-$370,000 USD</Salaryrange>
      <Skills>API-first applications, REST, gRPC, OAuth, OIDC, SAML, Authorization, Authentication, AI Agents, MCP servers, Security engineering, Application security, Cloud security best practices, Node.js, Go, AWS, Azure, MongoDB, Redis, PostgreSQL, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7128746</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>16599c27-a87</externalid>
      <Title>Senior Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. Ensure reliability of multi-cloud Kubernetes clusters and pipelines. Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. Automate operations and engineering, focusing on automation so we can spend energy where it matters.</p>
<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python, and deep familiarity with container-related security best practices. Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Flexible PTO to take the time you need, when you need it</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Retirement savings plan to help you plan for the future</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>In-office meal program and commuter benefits provided for onsite employees</li>
</ul>
<p>Compensation at Cresta:</p>
<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>
<p>OTE Range: $205,000–$270,000 + Offers Equity</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$205,000–$270,000</Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center using AI and human intelligence.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5137153008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>de654376-b17</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a software company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p>Trusted by 65,000+ companies in 160+ countries, Carta’s platform of software and services lays the groundwork so you can build, invest, and scale with confidence.</p>
<p><strong>The Team You’ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We believe in hiring for Carta first, focusing on your core strengths and technical craft rather than a specific team’s immediate gap.</p>
<p>Staff Engineers at Carta are technical anchors for our business. You don’t just own features, you take long-term accountability for the technical health and strategic direction of different domains.</p>
<p><strong>The Problems You’ll Solve</strong></p>
<p>As a Staff Engineer, you are responsible for the long-term technical health and success of your business unit. You’ll work to:</p>
<ul>
<li>Navigate Ambiguity: Tackle the most complex and poorly defined problems at Carta, breaking them down into navigable paths for the rest of the organization.</li>
</ul>
<ul>
<li>Champion Systemic Improvement: Identify and eliminate failure patterns across multiple systems, driving architectural changes that improve scalability and reliability.</li>
</ul>
<ul>
<li>Bridge Technical Gaps: Use your deep understanding of cross-functional domains to align multiple teams on major technical decisions.</li>
</ul>
<ul>
<li>Define the AI Frontier: Lead the charge in transforming how we build by defining the context and building the rails that allow every person at Carta to leverage AI tools safely and effectively.</li>
</ul>
<ul>
<li>Uphold Engineering Standards: Set the vision for operational excellence and mentor senior engineers to raise the collective craft of the organization.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You are an expert in building distributed systems. While our primary stack is Python/Django, React, and Postgres, you should be comfortable guiding technical direction across JVM languages, gRPC, and cloud-native infrastructure (AWS).</li>
</ul>
<ul>
<li>Leadership: You lead through influence rather than authority, acting as a role model for constructive communication and technical discipline.</li>
</ul>
<ul>
<li>Vision: You don&#39;t just solve the problem in front of you; you anticipate future roadblocks and build systems that support long-term business growth.</li>
</ul>
<ul>
<li>Experience: We recommend 10+ years of professional software engineering experience with a track record of high-level technical leadership.</li>
</ul>
<p><strong>Salary</strong></p>
<p>Carta’s compensation package includes a market competitive salary, equity for all full time roles, exceptional benefits, and, for applicable roles, commissions plans. Our expected cash compensation (salary + commission if applicable) range for this role is: $205,600 - $257,000 CAD in Toronto, Ontario, Canada</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205,600 - $257,000 CAD</Salaryrange>
      <Skills>Python, Django, React, Postgres, JVM languages, gRPC, cloud-native infrastructure (AWS)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit, supporting 9,000+ funds and SPVs, representing nearly $185B in assets under management.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7656155003</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c55e6593-7f3</externalid>
      <Title>Senior Security Engineer (Product)</Title>
      <Description><![CDATA[<p>We are looking for a Senior Security Engineer (Product) to join our Trust team at Headway. As an early member on the team, you&#39;ll have the unique opportunity to be the builder and driver of our dedicated, in-house product and application security engineering efforts. In this role, you will partner closely with our product and engineering teams to ensure that our application is designed and developed securely so that we can maintain and grow customers&#39; trust in Headway.</p>
<p>Some of the key responsibilities of this role include:</p>
<ul>
<li>Partnering with Product and Engineering to ensure that our application is designed and developed securely</li>
<li>Participating in the implementation efforts, doing security reviews, helping with product design decisions, auditing and surfacing vulnerabilities in our current products</li>
<li>Developing and improving our automated tooling to scale our application security capabilities and find potential code problems both before and after we deploy</li>
<li>Making the safe way, the easy way, by working on defining and building application guardrails so that developers can build securely by default</li>
<li>Assisting in ongoing security operations, including incident response, vulnerability management, penetration testing, security reviews, and other operational tasks to ensure that our security program is operating at a world-class level</li>
</ul>
<p>We use a variety of tools and technologies, including Cloud Security: Lacework, Languages: Python 3, TypeScript, Libraries: FastAPI, SQLAlchemy, React, Datastores: Postgres, Redis, Infrastructure: AWS (Fargate, ECS, S3, and more), Spark and Kafka, Monitoring: Datadog, PagerDuty, Version Control: Github, Vulnerability Management: Snyk, Semgrep</p>
<p>If you have 0 → 1 security experience, strong cross-functional experience, strong technical depth and breadth, thrive in ambiguity, innovation at scale, results-driven, and mission-driven, you&#39;ll be great for this role.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$218,500 - $273,125</Salaryrange>
      <Skills>Cloud Security, Python 3, TypeScript, FastAPI, SQLAlchemy, React, Postgres, Redis, AWS, Datadog, PagerDuty, Github, Snyk, Semgrep</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Headway</Employername>
      <Employerlogo>https://logos.yubhub.co/headway.com.png</Employerlogo>
      <Employerdescription>Headway is a technology company that builds software for mental healthcare providers. It has grown into a diverse, national network of over 60,000 mental healthcare providers across all 50 states.</Employerdescription>
      <Employerwebsite>https://www.headway.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/headway/jobs/5717998004</Applyto>
      <Location>New York, New York, United States; San Francisco, California, United States; Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe235887-6b4</externalid>
      <Title>Senior Fullstack Product Software Engineer, DocSend</Title>
      <Description><![CDATA[<p>As a Senior Full-Stack Product Engineer on the Dropbox DocSend team, you will play a pivotal role in shaping the future of secure document management, sharing, and tracking.</p>
<p>Your responsibilities will revolve around developing and enhancing our product to deliver exceptional user experiences , working closely with cross-functional teams to turn innovative ideas into robust, scalable, and user-friendly features. You will also have the opportunity to drive high impact and have high ownership in a smaller, startup-like team.</p>
<p>We are focused on expanding our Virtual Data Room business by improving deal workflows and introducing AI-enabled features.</p>
<p>You will autonomously lead full-stack projects, making effective tradeoffs between technical requirements and business goals. You will act as a leader across the org with impact extending beyond the immediate team, driving cross-team initiatives and collaborating effectively with cross-functional teams, including product managers, designers, and other engineers.</p>
<p>You will set a high bar for quality and operational excellence, preemptively identifying and resolving technical risks, and championing best practices across the team through code and design reviews.</p>
<p>You will mentor teammates, providing actionable feedback to help teammates grow into the next level. You will participate in on-call rotations, which entails being available for calls during both core and non-core business hours, and debug customer issues using logs, metrics, and traces.</p>
<p>The ideal candidate will have 9+ years of experience in software engineering or related industry roles, a BS degree in Computer Science or related technical field involving coding, and demonstrated expertise in Ruby on Rails applications and React.</p>
<p>Preferred qualifications include familiarity with tools and languages used on the DocSend Engineering team, such as Typescript, GraphQL, HAML, and PostgreSQL.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,200-$274,300 USD</Salaryrange>
      <Skills>Ruby on Rails, React, Typescript, GraphQL, HAML, PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file-sharing services. It has a double-digit growth rate year over year.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7641558</Applyto>
      <Location>Remote - US: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e68e5c3b-1e2</externalid>
      <Title>Lakebase Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>
<p>You will lead with outcomes for key Lakebase personas , including platform teams and developers, data teams, and central IT , articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>
<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>
<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>
<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>
<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>
<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>
<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>
<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>
<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>
<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>
<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>
<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>
<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>
<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>
<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>
<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>
<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>
<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>
<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>
<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>
<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>
<p>Bachelor’s degree or equivalent practical experience.</p>
<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>
<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>
<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>
<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>
<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>
<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8449848002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3bf626c-40c</externalid>
      <Title>Senior Software Engineer II</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a software company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p>Trusted by 65,000+ companies in 160+ countries, Carta’s platform of software and services lays the groundwork so you can build, invest, and scale with confidence.</p>
<p><strong>The Team You’ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We’re excited to meet people who are energized by complex, ambiguous problems.</p>
<p><strong>The Problems You’ll Solve</strong></p>
<p>As a Senior Software Engineer II, you will lead technically complex projects and serve as a multiplier for your team.</p>
<ul>
<li>Drive Implementation: Lead the execution of complex technical projects, driving them from concept to production while maintaining high standards for performance and reliability.</li>
</ul>
<ul>
<li>Simplify Systems: Dig deep into our architecture to identify opportunities to simplify code and infrastructure, prioritizing changes that have a measurable business impact.</li>
</ul>
<ul>
<li>Leverage Modern Tooling: Use the best AI-assisted engineering tools to accelerate your workflow, improve code quality, and spend more of your time solving the high-level logic and unconventional problems.</li>
</ul>
<ul>
<li>Foster Growth: Act as a mentor and coach, raising the technical bar for your peers through diligent PR reviews and architectural guidance.</li>
</ul>
<ul>
<li>Collaborate Cross-Functionally: Partner with product and design to ensure we are building the right solution for the user, not just following a specification.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You have experience with (or a desire to learn) our core technologies: Python, Django, React, Postgres, and Kafka. We also utilize Java, gRPC, and AWS.</li>
</ul>
<ul>
<li>Execution: You can break down complex user stories into actionable tasks and execute them with minimal guidance.</li>
</ul>
<ul>
<li>Strategic Mindset: You understand the &#39;why&#39; behind your code and can articulate technical trade-offs to stakeholders.</li>
</ul>
<ul>
<li>Experience: We recommend 8+ years of professional software engineering experience for this level.</li>
</ul>
<p><strong>Salary</strong></p>
<p>Carta’s compensation package includes a market competitive salary, equity for all full time roles, exceptional benefits, and, for applicable roles, commissions plans.</p>
<p>Our expected cash compensation (salary + commission if applicable) range for this role is:</p>
<ul>
<li>$181,050 - $213,000 CAD in Toronto, Ontario, Canada</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,050 - $213,000 CAD</Salaryrange>
      <Skills>Python, Django, React, Postgres, Kafka, Java, gRPC, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7656149003</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1903386-87b</externalid>
      <Title>Staff Infrastructure Software Engineer (Kubernetes)</Title>
      <Description><![CDATA[<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</p>
<p>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</p>
<p>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>
<p>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</p>
<p>Automate operations and engineering.</p>
<p>Focus on automation so we can spend energy where it matters.</p>
<p>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>
<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</p>
<p>Deep proficiency with coding languages such as Golang or Python.</p>
<p>Deep familiarity with container-related security best practices.</p>
<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</p>
<p>Experience with GPU-enabled clusters is a bonus.</p>
<p>Production experience with Kubernetes templating tools such as Helm or Kustomize.</p>
<p>Production experience with IAC tools such as Terraform or CloudFormation.</p>
<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</p>
<p>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</p>
<p>Production experience with database software such as PostgreSQL.</p>
<p>Experience with GitOps tooling such as Flux or Argo.</p>
<p>Experience with CI/CD such as GitHub Actions.</p>
<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>
<p>Compensation includes a base salary, equity, and a variety of benefits.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower team members to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4535898008</Applyto>
      <Location>Germany (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26212e9e-5a8</externalid>
      <Title>Infrastructure Engineer/SRE</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>
<p>As a collaborative but highly autonomous working environment, each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>
<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>
<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>
<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>
<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>
<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>5+ years experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>
<li>Deep proficiency with coding languages such as Golang or Python.</li>
<li>Deep familiarity with container-related security best practices.</li>
<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>
<li>Experience with GPU-enabled clusters is a bonus.</li>
<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>
<li>Production experience with IAC tools such as Terraform or CloudFormation.</li>
<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>
<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>
<li>Production experience with database software such as PostgreSQL.</li>
<li>Experience with GitOps tooling such as Flux or Argo.</li>
<li>Experience with CI/CD such as GitHub Actions.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>
<li>Flexible vacation time to promote a healthy work-life blend.</li>
<li>Paid parental leave to support you and your family.</li>
</ul>
<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5113847008</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ace16925-ba7</externalid>
      <Title>Engineering Manager - Platform (FinHub)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking an experienced Engineering Manager to lead the Ledger team within the Product Foundations - Platform Product Group.</p>
<p>Ledger is one of the core services for Coinbase, responsible for processing transactions and managing the funds of our users.</p>
<p>As one of Coinbase&#39;s most mission-critical services, Ledger sits at the core of our platform, processing billions in transactions and securing the assets of millions of users.</p>
<p>Today, our scale and complexity of operations have far surpassed the original design of Ledger and fund management systems.</p>
<p>This presents a rare and exciting opportunity to rearchitect foundational infrastructure that will shape Coinbase&#39;s success for the next decade.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage engineering teams, to guide the development of features, services, and infrastructure.</li>
</ul>
<ul>
<li>Coach your direct reports to have a positive impact on the organization and support their career growth.</li>
</ul>
<ul>
<li>Implement the multi-year strategy for our team and collaborate with engineers, designers, product managers and senior leadership to turn our vision into a tangible roadmap every quarter.</li>
</ul>
<ul>
<li>Be a thoughtful technical voice within the team, aiding in diligent architectural decisions and fostering a culture of high-quality code and engineering processes.</li>
</ul>
<ul>
<li>Collaborate with Product and Engineering teams to ensure successful delivery and operation of multi-tenanted, distributed systems at scale.</li>
</ul>
<ul>
<li>Work closely with our talent organization to identify and recruit exceptional engineers who align with Coinbase&#39;s culture and will help the team scale.</li>
</ul>
<ul>
<li>Contribute to and take ownership of processes that drive engineering quality and meet our engineering SLAs.</li>
</ul>
<p>What We Look For In You:</p>
<ul>
<li>At least 7 years of experience in software engineering.</li>
</ul>
<ul>
<li>At least 2 years of engineering management experience.</li>
</ul>
<ul>
<li>You possess a strong understanding of what constitutes high-quality code and effective engineering practices.</li>
</ul>
<ul>
<li>An execution-focused mindset, capable of navigating through ambiguity and delivering results.</li>
</ul>
<ul>
<li>An ability to balance long-term strategic thinking with short-term planning.</li>
</ul>
<ul>
<li>Experience in creating, delivering, and operating multi-tenanted, distributed systems at scale.</li>
</ul>
<ul>
<li>You can be hands-on when needed – whether that’s writing/reviewing code or technical documents, participating in on-call rotations and leading incidents, or triaging/troubleshooting bugs.</li>
</ul>
<ul>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human‑in‑the‑loop practices to deliver business‑ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Prior experience leading a Platform or similar domain team.</li>
</ul>
<ul>
<li>Experience designing and operating ledgering or trading systems at scale.</li>
</ul>
<ul>
<li>Experience with financial data, accounting systems, or high-precision transaction processing.</li>
</ul>
<ul>
<li>Experience with Go, Kubernetes, Postgres, or similar technologies.</li>
</ul>
<ul>
<li>You have gone through a rapid growth in your company (from startup to mid-size).</li>
</ul>
<ul>
<li>Crypto-forward experience, including familiarity with onchain activity such as interacting with Ethereum addresses, using ENS, and engaging with dApps or blockchain-based services.</li>
</ul>
<p>Job #: P76571</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$218,025-$256,500 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>software engineering, engineering management, high-quality code, effective engineering practices, execution-focused mindset, long-term strategic thinking, short-term planning, multi-tenanted, distributed systems, generative AI tools, copilots, LibreChat, Gemini, Glean, Platform or similar domain team, ledgering or trading systems, financial data, accounting systems, high-precision transaction processing, Go, Kubernetes, Postgres, similar technologies, rapid growth, crypto-forward experience, onchain activity, Ethereum addresses, ENS, dApps or blockchain-based services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7790065</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d4ebd626-2bf</externalid>
      <Title>Staff+ Software Engineer, Databases</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>
<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>
<p>This is a unique opportunity to tackle database challenges at unprecedented scale. You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive the technical direction for database solutions used across Product and Research</li>
</ul>
<ul>
<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>
</ul>
<ul>
<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>
</ul>
<ul>
<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>
</ul>
<ul>
<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>
</ul>
<ul>
<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>
</ul>
<ul>
<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>
</ul>
<ul>
<li>Make critical build vs. buy decisions for database technologies</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>
</ul>
<ul>
<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
</ul>
<ul>
<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>
</ul>
<ul>
<li>Have successfully scaled databases through massive growth at high-growth companies</li>
</ul>
<ul>
<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>
</ul>
<ul>
<li>Excel at technical leadership and cross-functional collaboration</li>
</ul>
<ul>
<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>
</ul>
<ul>
<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>
</ul>
<ul>
<li>Experience building multi-cloud or hybrid cloud database solutions</li>
</ul>
<ul>
<li>Knowledge of database orchestration and automation at scale</li>
</ul>
<ul>
<li>Background at companies known for database excellence</li>
</ul>
<p>Note: Prior AI/ML infrastructure experience is not required. We value deep infrastructure/databases expertise from any domain.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>database architecture, OLTP systems, distributed database systems, database scaling, database performance optimization, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, vector databases, async job processing frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151069008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc54ed6c-ca0</externalid>
      <Title>Full-Stack Engineer, Core Services (Senior Level)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Full-Stack Engineer to join our Core Services team. As a senior-level engineer, you&#39;ll design, build, and optimise the core systems and management platforms that power the Instabase platform.</p>
<p>This is a high-impact role for a &#39;product-minded engineer&#39;. In our Core Services team, we treat our platform as a product. Because we operate with a lean team, you will have end-to-end ownership: from writing Product Requirement Documents (PRDs) and building the high-performance backend services and scalable infrastructure that support them.</p>
<p>Responsibilities:</p>
<ul>
<li>Full Stack Development: You will function as a product-minded engineer for our internal platform. This involves architecting secure infrastructure (Kubernetes, Docker) and backend services (Go, Python, PostgresDB), while also building the frontend interfaces (React, TS) to support features.</li>
</ul>
<ul>
<li>Developer Experience: Create the internal platforms and dashboards that improve developer velocity, reliability, and observability across the entire organisation.</li>
</ul>
<ul>
<li>Technical Leadership: Act as a technical leader who mentors junior engineers, contributes to the entire infrastructure codebase, and identifies root causes for critical system issues.</li>
</ul>
<p>About you:</p>
<ul>
<li>Education: BS, MS, or PhD in Computer Science, or equivalent experience in a technical field such as Physics or Mathematics.</li>
</ul>
<ul>
<li>Experience: 5+ years of professional software development experience with a strong foundation in CS fundamentals.</li>
</ul>
<ul>
<li>Backend Expertise: Proficiency in Go and Python, with a deep understanding of building scalable backend services and APIs.</li>
</ul>
<ul>
<li>Frontend Expertise: Strong experience with React, TypeScript, and JavaScript for building complex, data-rich web applications.</li>
</ul>
<ul>
<li>Infrastructure &amp; Orchestration: Proficiency with Docker, Kubernetes, and cloud infrastructure (AWS, GCP, or Azure).</li>
</ul>
<ul>
<li>Product Thinking &amp; UI Design: You are comfortable functioning as your own PM and Designer and write technical specs (PRDs) to define how users interact with infrastructure.</li>
</ul>
<ul>
<li>Communication: Excellent communication skills to represent technical and product decisions to the wider engineering team.</li>
</ul>
<p>Good to have:</p>
<ul>
<li>Experience with React Native for mobile or cross-platform applications.</li>
</ul>
<ul>
<li>Prior experience in a startup environment where you handled multi-functional responsibilities (Dev, PM, and Design).</li>
</ul>
<p>Compensation: The base salary range for this role is $190,000 to $205,000 + bonus, equity and US benefits.</p>
<p>US Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
</ul>
<ul>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
</ul>
<ul>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
</ul>
<ul>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
</ul>
<ul>
<li>Therapy Sessions Covered: Mental health matters, 10 free sessions through Samata Health.</li>
</ul>
<ul>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
</ul>
<ul>
<li>Lunch on Us: Enjoy a lunch credit when you&#39;re in the office.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Instabase is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $205,000 + bonus, equity and US benefits</Salaryrange>
      <Skills>Go, Python, PostgresDB, Kubernetes, Docker, React, TypeScript, JavaScript, Cloud infrastructure (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase provides a platform for organisations to solve unstructured data problems using AI.
It has customers representing large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8186577002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>564289ba-f9b</externalid>
      <Title>Senior Fullstack Product Software Engineer, DocSend</Title>
      <Description><![CDATA[<p>As a Senior Full-Stack Product Engineer on the Dropbox DocSend team, you will play a pivotal role in shaping the future of secure document management, sharing, and tracking.</p>
<p>Your responsibilities will revolve around developing and enhancing our product to deliver exceptional user experiences , working closely with cross-functional teams to turn innovative ideas into robust, scalable, and user-friendly features. You will also have the opportunity to drive high impact and have high ownership in a smaller, startup-like team.</p>
<p>We are focused on expanding our Virtual Data Room business by improving deal workflows and introducing AI-enabled features.</p>
<p>Key responsibilities include:</p>
<p>Autonomously leading full-stack projects, making effective tradeoffs between technical requirements and business goals. Acting as a leader across the org with impact extending beyond the immediate team, driving cross-team initiatives and collaborating effectively with cross-functional teams, including product managers, designers, and other engineers. Setting a high bar for quality and operational excellence, preemptively identifying and resolving technical risks, and championing best practices across the team through code and design reviews. Mentoring teammates, providing actionable feedback to help teammates grow into the next level. Participating in on-call rotations, which entails being available for calls during both core and non-core business hours, and debugging customer issues using logs, metrics, and traces.</p>
<p>Requirements include:</p>
<p>9+ years of experience in software engineering or related industry roles. BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience. Demonstrated expertise in Ruby on Rails applications and React. Demonstrated success in developing and deploying large-scale web applications with a user-focused approach. Proven ability to thrive in agile, fast-paced environments, including comfort with continuous deployment practices and rapid iteration.</p>
<p>Preferred qualifications include familiarity with tools and languages used on the DocSend Engineering team, including Typescript, GraphQL, HAML, and PostgreSQL.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,400-$257,600 CAD</Salaryrange>
      <Skills>Ruby on Rails, React, Typescript, GraphQL, HAML, PostgreSQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox&apos;s fastest-growing business, with a double-digit growth rate year over year.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7641561</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5f2dbbff-10c</externalid>
      <Title>Principal Software Engineer - Search Relevance - Elasticsearch</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Elasticsearch - Search team. This globally-distributed team of expert engineers focuses on delivering a robust and feature-rich search experience, including contributing to improving the search experience in Lucene.</p>
<p>This is a principal software engineering role that focuses on enhancing the vector and keyword search functionality within Elasticsearch, covering the design and implementation of new search features, enhancements to existing search functionality, and resolving bugs.</p>
<p>Our company is distributed by intention. We hire the best engineers we can find wherever they are, whoever they are. We collaborate across continents every day over email, GitHub, Zoom, and Slack. At our best, we write fast, scalable and intuitive software. We believe that the best way to do that is to empower individual engineers, code review every change, decide big things by consensus, and strive for incremental improvements.</p>
<p><strong>What You Will Be Doing</strong></p>
<ul>
<li>Lead initiatives within Elasticsearch to produce an industry-leading search engine offering, supplying unparalleled speed and relevance in search.</li>
<li>Contribute to Elasticsearch full time, building new search features and fixing intriguing bugs, all while making the code easier to understand. Sometimes you&#39;ll need to invent a new algorithm or data structure. Or find one and implement it. Sometimes you&#39;ll need to get close to the operating system and hardware.</li>
<li>Work with a globally distributed team of experienced engineers focused on the search capabilities of Elasticsearch.</li>
<li>Be an expert on Elasticsearch search relevance. You&#39;ll identify and drive improvements in this area based on your questions and your instincts.</li>
<li>Work with community members from all over the world on issues and pull requests, sometimes triaging them and handing them off to other experts and sometimes handling them yourself.</li>
<li>Write idiomatic modern Java -- Elasticsearch is 99.8% Java!</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Professional experience with search and vector databases, and you used HNSW, IVF, or other relevant algorithms and libraries on search platforms at scale.</li>
<li>You have strong skills in core Java and are conversant in the standard library of data structures and concurrency constructs, as well as other features like lambdas.</li>
<li>You work with a high level of autonomy, and are able to take on projects and guide them from beginning to end. This covers both technical design and working with other engineers to develop needed components.</li>
<li>You&#39;re comfortable developing collaboratively. Giving and receiving feedback on code and approaches and APIs is hard! Bonus points if you&#39;ve collaborated over the internet because that&#39;s harder. Double bonus points for asynchronous collaboration over the internet. That&#39;s even harder, but we do it anyway because it&#39;s the best way we know how to build software.</li>
<li>You&#39;ve used several data storage technologies like Elasticsearch, Solr, PostgreSQL, MongoDB, or Cassandra and have some idea how they work and why they work that way.</li>
<li>You have excellent verbal and written communication skills. Like we said, collaborating on the internet is hard. We try to be respectful, empathetic, and trusting in all of our interactions. And we&#39;d expect that from you too.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>You&#39;ve built things with Elasticsearch before.</li>
<li>You&#39;ve worked with open source projects and are familiar with different styles of source control workflow and continuous integration.</li>
<li>You have experience designing, leading and owning cross-functional initiatives</li>
</ul>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is listed below.</p>
<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>
<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched Registered Retirement Savings Plan (RRSP) with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
<p>The typical starting salary range for this role is: $154,000-$243,600 CAD</p>
<p><strong>Additional Information - We Take Care of Our People</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with minimum of 16 weeks of parental leave</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,000-$243,600 CAD</Salaryrange>
      <Skills>Java, Search and vector databases, HNSW, IVF, Lucene, Elasticsearch, Solr, PostgreSQL, MongoDB, Cassandra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Its platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7699668</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>75874f94-6d7</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p><strong>The Team You’ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We believe in hiring for Carta first, focusing on your core strengths and technical craft rather than a specific team’s immediate gap. Staff Engineers at Carta are technical anchors for our business.</p>
<p><strong>The Problems You’ll Solve</strong></p>
<p>As a Staff Engineer, you are responsible for the long-term technical health and success of your business unit. You’ll work to:</p>
<ul>
<li>Navigate Ambiguity: Tackle the most complex and poorly defined problems at Carta, breaking them down into navigable paths for the rest of the organization.</li>
</ul>
<ul>
<li>Champion Systemic Improvement: Identify and eliminate failure patterns across multiple systems, driving architectural changes that improve scalability and reliability.</li>
</ul>
<ul>
<li>Bridge Technical Gaps: Use your deep understanding of cross-functional domains to align multiple teams on major technical decisions.</li>
</ul>
<ul>
<li>Define the AI Frontier: Lead the charge in transforming how we build by defining the context and building the rails that allow every person at Carta to leverage AI tools safely and effectively.</li>
</ul>
<ul>
<li>Uphold Engineering Standards: Set the vision for operational excellence and mentor senior engineers to raise the collective craft of the organization.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You are an expert in building distributed systems. While our primary stack is Python/Django, React, and Postgres, you should be comfortable guiding technical direction across JVM languages, gRPC, and cloud-native infrastructure (AWS).</li>
</ul>
<ul>
<li>Leadership: You lead through influence rather than authority, acting as a role model for constructive communication and technical discipline.</li>
</ul>
<ul>
<li>Vision: You don&#39;t just solve the problem in front of you; you anticipate future roadblocks and build systems that support long-term business growth.</li>
</ul>
<ul>
<li>Experience: We recommend 10+ years of professional software engineering experience with a track record of high-level technical leadership.</li>
</ul>
<p><strong>Salary</strong></p>
<p>Carta’s compensation package includes a market competitive salary, equity for all full time roles, exceptional benefits, and, for applicable roles, commissions plans. Our expected cash compensation (salary + commission if applicable) range for this role is: $205,600 - $257,000 CAD in Waterloo, Ontario, Canada</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$205,600 - $257,000 CAD</Salaryrange>
      <Skills>Python, Django, React, Postgres, JVM languages, gRPC, cloud-native infrastructure (AWS)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit. It supports 9,000+ funds and SPVs, representing nearly $185B in assets under management.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7656158003</Applyto>
      <Location>Waterloo, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9503a764-3c3</externalid>
      <Title>Staff Backend (Python) Engineer, AI Engineering:Duo Chat</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (Python) on the Duo Chat team in AI Engineering, you&#39;ll lead the backend architecture that powers GitLab Duo Chat across the GitLab DevSecOps platform.</p>
<p>You&#39;ll solve hard problems in building reliable, secure, and scalable AI-powered chat workflows so customers can plan, write, review, and secure code faster, with confidence.</p>
<p>This is a hands-on technical leadership role where you&#39;ll set direction for how we integrate and evolve large language model providers (including Google Vertex AI) across Ruby on Rails and Python services, raise the bar on observability and testing, and guide the team through ambiguous, high-impact technical decisions.</p>
<p>Over your first year, you&#39;ll be expected to drive key architectural choices, reduce technical debt that slows iteration, and help the team ship durable improvements to response quality, reliability, and maintainability.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Integrate new generative AI models and providers into GitLab Duo Chat to expand capabilities and improve response quality</li>
</ul>
<ul>
<li>Improve debugging, observability, and test coverage for AI-powered chat workflows to increase reliability at scale</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define the technical architecture and technical roadmap for the Duo Chat group, aligning backend execution with product direction and engineering priorities</li>
</ul>
<ul>
<li>Solve the highest-scope and most ambiguous backend problems, delivering secure, well-tested, performant solutions with minimal guidance</li>
</ul>
<ul>
<li>Integrate and extend generative AI capabilities in GitLab Duo Chat, including large language models (LLMs) and providers such as Google Vertex AI</li>
</ul>
<ul>
<li>Develop, ship, and maintain backend features across Python and Ruby on Rails services that power Duo Chat experiences across the GitLab platform</li>
</ul>
<ul>
<li>Design, implement, and review GraphQL application programming interface (API) contracts and supporting backend logic to ensure reliability, scalability, and clear frontend integrations</li>
</ul>
<ul>
<li>Improve observability, debugging workflows, and incident readiness by strengthening logging, tracing, and production troubleshooting practices</li>
</ul>
<ul>
<li>Drive code quality and long-term maintainability by setting internal standards, leading code reviews, and identifying and reducing technical debt</li>
</ul>
<ul>
<li>Mentor engineers across the team and participate in Tier 2 on-call rotations, contributing to root cause analysis and follow-up improvements to resiliency and testing (including RSpec)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Production experience building and operating backend services in Python, including background jobs, APIs, and data models</li>
</ul>
<ul>
<li>Ability to define and evolve technical architecture by weighing trade-offs, selecting patterns and tools, and setting a clear technical direction for others to follow</li>
</ul>
<ul>
<li>Experience setting and driving a technical roadmap in partnership with product and engineering stakeholders</li>
</ul>
<ul>
<li>Proficiency designing and maintaining REST and/or GraphQL APIs with attention to scalability, maintainability, and backward compatibility</li>
</ul>
<ul>
<li>Hands-on experience integrating large language models into applications, including prompt design and building features powered by generative AI</li>
</ul>
<ul>
<li>Strong SQL skills and experience working with relational databases such as PostgreSQL, including efficient queries and data modeling</li>
</ul>
<ul>
<li>Experience mentoring engineers through code review, architectural guidance, and shared standards, and communicating complex technical decisions in a clear, async-first way</li>
</ul>
<ul>
<li>Comfort contributing in a mature codebase across Python and Ruby on Rails, with openness to learning and applying transferable skills from related technologies or domains</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Duo Chat team sits within GitLab&#39;s AI Engineering organization and is responsible for building and evolving GitLab Duo Chat, the AI-powered chat experience embedded across the GitLab DevSecOps platform.</p>
<p>You&#39;ll work with a small, cross-functional group of backend, frontend, and AI specialists who collaborate asynchronously across time zones, using GitLab issues, merge requests, and documentation as the primary way of working.</p>
<p>The team focuses on integrating and scaling generative AI capabilities (including providers like Google Vertex AI), improving reliability and performance, and strengthening debugging, observability, and testing workflows so customers can safely use AI to plan, write, review, and secure their code across GitLab.</p>
<p><strong>How GitLab Supports Full-Time Employees</strong></p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Backend engineering, API design, GraphQL, Ruby on Rails, PostgreSQL, SQL, Large language models, Generative AI, Prompt design, Code review, Architectural guidance, Async-first communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8450446002</Applyto>
      <Location>Remote, Americas; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4920db00-eb9</externalid>
      <Title>Senior Backend Engineer (RoR), SSCS: Authorization</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the Authorization team at GitLab, you&#39;ll build and evolve the core systems that decide who can access what across the entire GitLab platform, directly impacting millions of users from startups to large enterprises.</p>
<p>You&#39;ll architect and implement our next-generation authorization infrastructure, including policy-as-code approaches, fine-grained permissions, and performance optimizations at massive scale, enabling GitLab&#39;s move toward zero-trust architecture while keeping authorization fast, secure, and correct.</p>
<p>You&#39;ll work closely with Security, Database, Platform, and authentication-focused teams to design and ship authorization capabilities that span GitLab&#39;s various deployment models and multi-tenant environments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Implementing fine-grained permissions for Job Tokens, Personal Access Tokens, and the GitLab Duo agent platform</li>
</ul>
<ul>
<li>Collaborating on Auth stack initiatives that evolve how authorization works across GitLab</li>
</ul>
<ul>
<li>Implement fine-grained permission systems for Job Tokens, Personal Access Tokens, the GitLab Duo Agent Platform, and other authentication mechanisms across the GitLab platform.</li>
</ul>
<ul>
<li>Collaborate with Security, Authentication, Database, and Platform teams on authorization stack initiatives, aligning designs and implementation plans.</li>
</ul>
<ul>
<li>Solve complex performance challenges in authorization, including query optimization, caching strategies, and database decomposition, with a focus on PostgreSQL.</li>
</ul>
<ul>
<li>Design and evolve authorization systems that work across multiple deployment models and multi-tenant architectures while maintaining security and reliability.</li>
</ul>
<ul>
<li>Drive improvements to authorization security, maintainability, and developer experience through code review, documentation, and technical leadership.</li>
</ul>
<ul>
<li>Contribute to architectural decisions for authorization features with a long-term strategic view, balancing immediate needs with future scalability.</li>
</ul>
<ul>
<li>Mentor and support other engineers in authorization patterns, policy-based access control, and secure coding practices in a fully remote, asynchronous environment.</li>
</ul>
<ul>
<li>Professional experience building and maintaining production applications with Ruby on Rails or similar backend frameworks.</li>
</ul>
<ul>
<li>Strong understanding of authorization models, including role-based access control, attribute-based access control, and fine-grained permission patterns.</li>
</ul>
<ul>
<li>Experience designing and optimizing high-scale backend systems, including PostgreSQL performance tuning, query optimization, and effective caching strategies.</li>
</ul>
<ul>
<li>Familiarity with or interest in policy-based authorization systems and modern policy languages such as Cedar or Rego.</li>
</ul>
<ul>
<li>Understanding of core security principles, including threat modeling, least-privilege access, and zero-trust architectures.</li>
</ul>
<ul>
<li>Experience working with distributed systems and service-to-service communication in a cloud or multi-tenant environment.</li>
</ul>
<ul>
<li>Demonstrated ability to own complex technical initiatives from design through production deployment in an asynchronous, remote setting.</li>
</ul>
<ul>
<li>Strong collaboration and communication skills, with openness to learning and applying transferable skills from adjacent domains or technologies.</li>
</ul>
<p>We on the Authorization team at GitLab design, build, and maintain the permission systems that control access across the GitLab platform, ensuring they are secure, scalable, and flexible for customers of all sizes.</p>
<p>We lead the ongoing evolution of our authorization architecture, with a focus on modern policy-as-code approaches, fine-grained access control, and support for initiatives like the evolving Auth stack.</p>
<p>We collaborate asynchronously across time zones and partner closely with Authentication, Product Security, Database, and Security teams to align on identity, data modeling, and threat modeling needs while iterating safely on core platform capabilities.</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
</ul>
<ul>
<li>Flexible Paid Time Off</li>
</ul>
<ul>
<li>Team Member Resource Groups</li>
</ul>
<ul>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
</ul>
<ul>
<li>Growth and Development Fund</li>
</ul>
<ul>
<li>Parental leave</li>
</ul>
<ul>
<li>Home office support</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, Authorization models, Policy-based access control, Fine-grained permission patterns, Distributed systems, Service-to-service communication, Cloud or multi-tenant environment, Cedar or Rego policy languages, PostgreSQL performance tuning, Query optimization, Effective caching strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps that enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8457315002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6421dea-6e3</externalid>
      <Title>Strategic Hunter Account Executive - Lakebase</Title>
      <Description><![CDATA[<p>We are seeking a Strategic Hunter Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>This high-impact role sits within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>
<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>
<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>
<p><strong>The impact you will have</strong></p>
<ul>
<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</li>
</ul>
<ul>
<li>Lead with outcomes for key Lakebase personas , including platform teams and developers, data teams, and central IT , articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>
</ul>
<ul>
<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>
</ul>
<ul>
<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>
</ul>
<ul>
<li>Orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>
</ul>
<ul>
<li>Compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>
</ul>
<ul>
<li>Align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>
</ul>
<ul>
<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>
</ul>
<ul>
<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>
</ul>
<p><strong>What success looks like in this role</strong></p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<ul>
<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>
</ul>
<ul>
<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>
</ul>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<ul>
<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>
</ul>
<ul>
<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>
</ul>
<ul>
<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</li>
</ul>
<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>
<p><strong>What we look for</strong></p>
<ul>
<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>
</ul>
<ul>
<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>
</ul>
<ul>
<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>
</ul>
<ul>
<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</li>
</ul>
<ul>
<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>
</ul>
<ul>
<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>
</ul>
<ul>
<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>
</ul>
<ul>
<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>
</ul>
<ul>
<li>Bachelor’s degree or equivalent practical experience.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</li>
</ul>
<ul>
<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>
</ul>
<ul>
<li>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>
</ul>
<ul>
<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>
</ul>
<ul>
<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>
</ul>
<ul>
<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p><strong>Our Commitment to Diversity and Inclusion</strong></p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, technical buyers, business leaders, modern data and application architectures, cloud-native services, microservices, event-driven systems, AI and analytics strategies, technical stakeholders, business stakeholders, value selling skills, discovering pain, building a business case, quantified outcomes, communication, storytelling, negotiation skills, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics use cases, AI-native applications, agent-driven applications, high-growth environments, category-creating environments, partner collaborations, ISV collaborations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477547002</Applyto>
      <Location>Bengaluru, India; Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09e766cb-2a4</externalid>
      <Title>Software Engineer, Enterprise Integrations</Title>
      <Description><![CDATA[<p>Aboutfrica</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request.</p>
<p>Available Locations: Austin Texas</p>
<p>About the Department</p>
<p>Cloudflare&#39;s Enterprise Integrations Engineering Team designs, builds, and maintains integrations across a wide range of SaaS applications used throughout the organization. Our mission is to create scalable, reliable, and maintainable systems that ensure data flows securely and efficiently between platforms.</p>
<p>What You&#39;ll Do</p>
<p>We&#39;re looking for a software engineer to join our Enterprise Integrations Team. You&#39;ll work on building and maintaining integration workflows between Cloudflare and a variety of SaaS applications. This includes taking work from concept through implementation, including gathering requirements, writing technical specifications, development, testing, and deployment. You&#39;ll collaborate closely with internal teams to ensure integrations meet business needs and are built following engineering best practices. As you grow in the role, you&#39;ll have the opportunity to lead larger initiatives and own projects from end to end.</p>
<p>Qualifications &amp; Skills Required:</p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field, or equivalent work experience</li>
<li>Minimum of 5 years of professional experience as a software engineer</li>
<li>Experience working with internal stakeholders to solve business problems through integration solutions</li>
<li>Proficiency in Golang</li>
<li>Experience building RESTful APIs with proper service security practices</li>
<li>Experience working with observability tools such as Grafana, Prometheus, Sentry, or Kibana</li>
<li>Experience with Kubernetes</li>
<li>Experience with GitLab or other CI/CD tools</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience working with ERP systems such as Oracle or NetSuite</li>
<li>Experience working in an Agile Scrum environment</li>
<li>Familiarity with tools like Jira and Confluence</li>
<li>Familiarity with integration patterns such as pub/sub, CDM (Common Data Model), and batch processing</li>
<li>Experience working with PostgreSQL</li>
<li>Experience with Cloudflare Developer’s Platform</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, RESTful APIs, Observability tools, Kubernetes, GitLab, ERP systems, Agile Scrum, Jira, Confluence, Integration patterns, PostgreSQL, Cloudflare Developer’s Platform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7336735</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on experience in DSA coding, data structure, and algorithm. You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python, Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>533b470b-201</externalid>
      <Title>Finance Systems Engineer, Revenue</Title>
      <Description><![CDATA[<p>We are seeking a Finance Systems Engineer to join our Finance Systems team in San Francisco. In this hands-on engineering role, you will configure and extend the third-party platforms that run our financial operations including Zuora, Stripe, and Tesorio. You will write production Python, Node.js, and React code, author Workato recipes and API integrations across our SaaS stack, administer and tune the systems themselves, and ship working software,not manage vendors or write requirements documents.</p>
<p>You will work at the intersection of software engineering and finance, building and configuring the tools that allow our Accounting, Revenue Operations, and Order Management teams to operate efficiently, accurately, and in compliance with SOX and ASC 606 requirements.</p>
<p>The first thing you will inherit is our homegrown ledger application and the integrations that connect it to Workday, NetSuite, Zuora, Stripe, Tesorio, and Salesforce. From there, you will help us build the next generation of Finance tooling: self-serve workflows, automated reconciliation, and the operational surfaces that let Finance move at the speed the business demands.</p>
<p>If you thrive in fast-paced environments and enjoy building scalable financial infrastructure from the ground up, come join us in our mission to build safe, transformative AI.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Python, JavaScript/Node.js, React, Workato, API integrations, SOX compliance, ASC 606 revenue recognition, BigQuery, Postgres, MuleSoft, Zuora, CPQ, NetSuite, Workday, Stripe, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186669008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ab209e80-6b1</externalid>
      <Title>Senior Full Stack Product Software Engineer</Title>
      <Description><![CDATA[<p>As a Senior Full Stack Software Engineer at Dropbox, you will help design and develop the seamless, scalable, and user-friendly experiences Dropbox users depend on.</p>
<p>You will take ownership of key product areas, delivering end-to-end solutions that combine front-end user interfaces with robust back-end systems.</p>
<p>This year, Dropbox is on a mission to expedite the creation and implementation of AI-enabled products, providing a comprehensive technology stack for rapid prototyping and reliable deployment of AI-augmented functionality.</p>
<p>Responsibilities:</p>
<ul>
<li>Manage projects end-to-end: Lead initiatives from data discovery through design, implementation, and deployment.</li>
</ul>
<ul>
<li>Develop customer-centric prototypes: Create prototypes for new product explorations, focusing on user needs and feedback.</li>
</ul>
<ul>
<li>Proactively communicate: Share insights, progress, and outcomes with your team and leadership regularly.</li>
</ul>
<ul>
<li>Collaborate across teams: Foster strong relationships with other engineering teams and collaborate effectively with cross-functional partners within Dropbox.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of professional experience in full-stack development</li>
</ul>
<ul>
<li>BS degree or higher in Computer Science, a related field, or equivalent experience</li>
</ul>
<ul>
<li>Strong experience designing, developing, and scaling web applications</li>
</ul>
<ul>
<li>Expertise in front-end (JavaScript, React, Angular, HTML/CSS) and back-end (Node.js, Python) development</li>
</ul>
<ul>
<li>Familiarity with databases such as MySQL, PostgreSQL, or MongoDB</li>
</ul>
<p>Compensation:</p>
<p>Canada Pay Range $190,400-$257,600 CAD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,400-$257,600 CAD</Salaryrange>
      <Skills>full-stack development, JavaScript, React, Angular, HTML/CSS, Node.js, Python, MySQL, PostgreSQL, MongoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file sharing services.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7536345</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b4d74f5-cf9</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>The Company You’ll Join</strong></p>
<p>Carta is a software company that connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</p>
<p><strong>The Team You&#39;ll Work With</strong></p>
<p>You’ll enter our engineering interview process as part of a pooled hiring model. We’re excited to meet people who are energized by complex, ambiguous problems. We look for owners and problem-solvers who are eager to dive into the details of their craft, and are motivated by building products and experiences that meaningfully expand access to ownership.</p>
<p><strong>The Problems You&#39;ll Solve</strong></p>
<p>As a Senior Software Engineer II, you will lead technically complex projects and serve as a multiplier for your team. You’ll work to:</p>
<ul>
<li>Drive Implementation: Lead the execution of complex technical projects, driving them from concept to production while maintaining high standards for performance and reliability.</li>
<li>Simplify Systems: Dig deep into our architecture to identify opportunities to simplify code and infrastructure, prioritizing changes that have a measurable business impact.</li>
<li>Leverage Modern Tooling: Use the best AI-assisted engineering tools to accelerate your workflow, improve code quality, and spend more of your time solving the high-level logic and unconventional problems</li>
<li>Foster Growth: Act as a mentor and coach, raising the technical bar for your peers through diligent PR reviews and architectural guidance.</li>
<li>Collaborate Cross-Functionally: Partner with product and design to ensure we are building the right solution for the user, not just following a specification.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>The Tech Stack: You have experience with (or a desire to learn) our core technologies: Python, Django, React, Postgres, and Kafka. We also utilize Java, gRPC, and AWS.</li>
<li>Execution: You can break down complex user stories into actionable tasks and execute them with minimal guidance.</li>
<li>Strategic Mindset: You understand the &#39;why&#39; behind your code and can articulate technical trade-offs to stakeholders.</li>
<li>Experience: We recommend 8+ years of professional software engineering experience for this level.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Django, React, Postgres, Kafka, Java, gRPC, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit. It supports 9,000+ funds and SPVs, representing nearly $185B in assets under management.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7652562003</Applyto>
      <Location>London, England, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>